Python TensorFlow MNIST handwritten digit recognition examples (non-convolutional and convolutional)
This article demonstrates MNIST handwritten digit recognition with Python and TensorFlow, shared for your reference. The details are as follows:
Non-convolutional implementation
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data_path = r'F:\CNN\data\mnist'  # raw string so the backslashes are not treated as escapes
mnist_data = input_data.read_data_sets(data_path, one_hot=True)  # offline dataset

x_data = tf.placeholder("float32", [None, 784])  # None means we can feed any number of images
weight = tf.Variable(tf.ones([784, 10]))
bias = tf.Variable(tf.ones([10]))
Y_model = tf.nn.softmax(tf.matmul(x_data, weight) + bias)
#Y_model = tf.nn.sigmoid(tf.matmul(x_data, weight) + bias)
'''
# two-layer variant (note: the second layer shadows weight1/bias1)
weight1 = tf.Variable(tf.ones([784,256]))
bias1 = tf.Variable(tf.ones([256]))
Y_model1 = tf.nn.softmax(tf.matmul(x_data ,weight1) + bias1)
weight1 = tf.Variable(tf.ones([256,10]))
bias1 = tf.Variable(tf.ones([10]))
Y_model = tf.nn.softmax(tf.matmul(Y_model1 ,weight1) + bias1)
'''
y_data = tf.placeholder("float32", [None, 10])

loss = tf.reduce_sum(tf.pow((y_data - Y_model), 2))  # 92%-93%
#loss = tf.reduce_sum(tf.square(y_data - Y_model))   # 90%-91%
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

for i in range(100000):
    batch_xs, batch_ys = mnist_data.train.next_batch(50)
    sess.run(train, feed_dict={x_data: batch_xs, y_data: batch_ys})
    if i % 50 == 0:
        # tf.arg_max is deprecated; tf.argmax is the current name
        correct_predict = tf.equal(tf.argmax(Y_model, 1), tf.argmax(y_data, 1))
        accurate = tf.reduce_mean(tf.cast(correct_predict, "float"))
        print(sess.run(accurate, feed_dict={x_data: mnist_data.test.images,
                                            y_data: mnist_data.test.labels}))
```
Convolutional implementation
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data_path = r'F:\CNN\data\mnist'  # raw string so the backslashes are not treated as escapes
mnist_data = input_data.read_data_sets(data_path, one_hot=True)  # offline dataset

x_data = tf.placeholder("float32", [None, 784])  # None means we can feed any number of images
x_image = tf.reshape(x_data, [-1, 28, 28, 1])

# convolution + pooling: 28x28x1 -> 28x28x32 -> 14x14x32
w_conv = tf.Variable(tf.ones([5, 5, 1, 32]))  # weight
b_conv = tf.Variable(tf.ones([32]))           # bias
h_conv = tf.nn.relu(tf.nn.conv2d(x_image, w_conv, strides=[1, 1, 1, 1], padding='SAME') + b_conv)
h_pool = tf.nn.max_pool(h_conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# fully connected layer
w_fc = tf.Variable(tf.ones([14 * 14 * 32, 1024]))
b_fc = tf.Variable(tf.ones([1024]))
h_pool_flat = tf.reshape(h_pool, [-1, 14 * 14 * 32])
h_fc = tf.nn.relu(tf.matmul(h_pool_flat, w_fc) + b_fc)

# output layer (the original code also rebound w_fc here by mistake)
W_fc = tf.Variable(tf.ones([1024, 10]))
B_fc = tf.Variable(tf.ones([10]))
Y_model = tf.nn.softmax(tf.matmul(h_fc, W_fc) + B_fc)

y_data = tf.placeholder("float32", [None, 10])
loss = -tf.reduce_sum(y_data * tf.log(Y_model))  # cross-entropy
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

init = tf.global_variables_initializer()  # initialize_all_variables is deprecated
sess = tf.Session()
sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist_data.train.next_batch(5)
    sess.run(train_step, feed_dict={x_data: batch_xs, y_data: batch_ys})
    if i % 50 == 0:
        correct_prediction = tf.equal(tf.argmax(Y_model, 1), tf.argmax(y_data, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print(sess.run(accuracy, feed_dict={x_data: mnist_data.test.images,
                                            y_data: mnist_data.test.labels}))
```
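The `14 * 14 * 32` flatten size comes from the shape arithmetic of `padding='SAME'`: each spatial dimension shrinks to `ceil(input / stride)`, independent of the kernel size. A quick sketch (illustration only, not part of the original code) traces the 28x28 input through the conv and pool layers above:

```python
import math

def same_out(size, stride):
    # output size for padding='SAME' is ceil(input / stride)
    return math.ceil(size / stride)

h = w = 28
h, w = same_out(h, 1), same_out(w, 1)  # 5x5 conv, stride 1, SAME -> 28x28
h, w = same_out(h, 2), same_out(w, 2)  # 2x2 max-pool, stride 2, SAME -> 14x14
channels = 32                          # output channels of w_conv

flat = h * w * channels
print(h, w, flat)  # 14 14 6272
```

This is why `h_pool` is reshaped to `[-1, 14*14*32]` before the fully connected layer; if you change the pool stride or the channel count, this flatten size must change to match.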
更多關(guān)于Python相關(guān)內(nèi)容可查看本站專題:《Python數(shù)學(xué)運算技巧總結(jié)》、《Python圖片操作技巧總結(jié)》、《Python數(shù)據(jù)結(jié)構(gòu)與算法教程》、《Python函數(shù)使用技巧總結(jié)》、《Python字符串操作技巧匯總》及《Python入門與進階經(jīng)典教程》
We hope this article is helpful to readers working on Python programming.