
TensorFlow Deep Learning: Implementing a Convolutional Neural Network in an Alternative Programming Style

Updated: 2021-11-04 08:47:08   Author: 零尾
This article walks through an example of implementing a convolutional neural network in TensorFlow using an alternative programming style. Readers who need it can use it as a reference; hopefully it helps.
import tensorflow as tf
import numpy as np
import input_data  # MNIST loader script from the old TensorFlow tutorials
mnist = input_data.read_data_sets('data/', one_hot=True)
print("MNIST ready")
n_input  = 784 # 28*28 grayscale images, so 784 pixels per sample
n_output = 10  # 10-way classification
# Weights
weights = {
    # conv1: [3, 3, 1, 32] is the filter height, filter width, input depth, and number of filters (i.e. number of feature maps produced)
    'wc1': tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.1)),
    # conv2: 3, 3 as above; 32 is the input depth (the number of feature maps coming from conv1), 64 is the number of output feature maps
    'wc2': tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.1)),
    # fc1: flattens the feature maps into a vector; 1024 is chosen freely
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024], stddev=0.1)),
    # fc2: the 10-way classification layer, mapping 1024 units to 10 outputs
    'wd2': tf.Variable(tf.random_normal([1024, n_output], stddev=0.1))
}
"""
特征圖大小計算:
f_w = (w-f+2*pad)/s + 1 = (28-3+2*1)/1 + 1 = 28 # 說明經(jīng)過卷積層并沒有改變圖片的大小
f_h = (h-f+2*pad)/s + 1 = (28-3+2*1)/1 + 1 = 28
# 特征圖的大小是經(jīng)過池化層后改變的
第一次pooling后28*28變?yōu)?4*14
第二次pooling后14*14變?yōu)?*7,即最終是一個7*7*64的特征圖

"""
# Biases
biases = {
    'bc1': tf.Variable(tf.random_normal([32], stddev=0.1)),      # conv1: one bias per feature map, 32 in total
    'bc2': tf.Variable(tf.random_normal([64], stddev=0.1)),      # conv2: 64 feature maps
    'bd1': tf.Variable(tf.random_normal([1024], stddev=0.1)),    # fc1: 1024 units
    'bd2': tf.Variable(tf.random_normal([n_output], stddev=0.1)) # fc2: 10 outputs
}

def conv_basic(_input, _w, _b, _keep_prob):
    # INPUT
    # Reshape the image into the format TensorFlow expects, [n, h, w, c];
    # -1 lets TensorFlow infer that dimension once the other three are fixed
    _input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])

    # CONV LAYER 1
    _conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
    # [1, 1, 1, 1] are the strides along batch_size, h, w, c
    # padding is either 'SAME' (zero-pad when the window slides past the border) or 'VALID' (drop the leftover pixels)
    _conv1 = tf.nn.relu(tf.nn.bias_add(_conv1, _b['bc1'])) # activation right after the convolution
    # Max pooling: in [1, 2, 2, 1] the two 1s are batch_size and channel, the two 2s give a 2*2 pooling window
    _pool1 = tf.nn.max_pool(_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Randomly drop some neurons; _keep_prob is the fraction of neurons to keep, e.g. 0.6
    _pool_dr1 = tf.nn.dropout(_pool1, _keep_prob)

    # CONV LAYER 2
    _conv2 = tf.nn.conv2d(_pool_dr1, _w['wc2'], strides=[1, 1, 1, 1], padding='SAME')
    _conv2 = tf.nn.relu(tf.nn.bias_add(_conv2, _b['bc2']))
    _pool2 = tf.nn.max_pool(_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    _pool_dr2 = tf.nn.dropout(_pool2, _keep_prob) # dropout

    # VECTORIZE
    # Input to the fully connected layers: reshape the output of pool2 into a vector
    _densel = tf.reshape(_pool_dr2, [-1, _w['wd1'].get_shape().as_list()[0]])

    # FULLY CONNECTED LAYER 1
    _fc1 = tf.nn.relu(tf.add(tf.matmul(_densel, _w['wd1']), _b['bd1'])) # w*x+b, then ReLU
    _fc_dr1 = tf.nn.dropout(_fc1, _keep_prob) # dropout

    # FULLY CONNECTED LAYER 2
    _out = tf.add(tf.matmul(_fc_dr1, _w['wd2']), _b['bd2']) # w*x+b gives the final logits

    # RETURN
    out = {'input_r': _input_r, 'conv1': _conv1, 'pool1': _pool1, 'pool_dr1': _pool_dr1,
           'conv2': _conv2, 'pool2': _pool2, 'pool_dr2': _pool_dr2, 'densel': _densel,
           'fc1': _fc1, 'fc_dr1': _fc_dr1, 'out': _out
           }
    return out
print("CNN READY")
x = tf.placeholder(tf.float32, [None, n_input])  # placeholder; the number of samples is left as None
y = tf.placeholder(tf.float32, [None, n_output]) # placeholder; the number of samples is left as None
keep_prob = tf.placeholder(tf.float32)
_pred = conv_basic(x, weights, biases, keep_prob)['out'] # forward-pass prediction (logits)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=_pred, labels=y)) # cross-entropy loss
optm = tf.train.AdamOptimizer(0.001).minimize(cost) # Adam optimizer
_corr = tf.equal(tf.argmax(_pred, 1), tf.argmax(y, 1)) # compare predicted index with label index; True if they match, False otherwise
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # cast True/False to 1/0 and average over all samples
init = tf.global_variables_initializer()
print("FUNCTIONS READY")

# With the network structure defined above, set a few hyperparameters
training_epochs = 1000 # iterate over all training samples 1000 times
batch_size = 100 # pick 100 samples per iteration
display_step = 1
# LAUNCH THE GRAPH
sess = tf.Session() # create a Session
sess.run(init) # run the variable initialization op inside the session
# OPTIMIZE
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(mnist.train.num_examples/batch_size)
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size) # fetch the data one batch at a time
        sess.run(optm, feed_dict={x: batch_xs, y: batch_ys, keep_prob:0.5})
        avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, keep_prob:1.0})/total_batch
    if epoch % display_step == 0:
        train_accuracy = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.0})
        test_accuracy = sess.run(accr, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob:1.0})
        print("Epoch: %03d/%03d cost: %.9f TRAIN ACCURACY: %.3f TEST ACCURACY: %.3f"
              % (epoch, training_epochs, avg_cost, train_accuracy, test_accuracy))
print("DONE")

My graphics card is a GTX 960. The first time I ran this network I set the filter counts to 64 and 128, and it failed with a cryptic error that boiled down to running out of GPU memory, so I changed them to 32 and 64 to produce fewer feature maps. I guess that means I should upgrade to a 1080 (a memory-related sketch follows the log below).

I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.304
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.33GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0:   Y 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 2.59GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 1.34GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 2.10GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 3.90GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Epoch: 000/1000 cost: 0.517761162 TRAIN ACCURACY: 0.970 TEST ACCURACY: 0.967
Epoch: 001/1000 cost: 0.093012387 TRAIN ACCURACY: 0.960 TEST ACCURACY: 0.979
...
(remaining epochs omitted)
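
If you hit the same out-of-memory warnings, one more knob worth trying before upgrading the card is letting TensorFlow allocate GPU memory on demand instead of reserving almost the whole card at start-up. Below is a minimal sketch using the TF1 ConfigProto options; it replaces the plain tf.Session() call above and does not shrink the network's actual memory footprint:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow the GPU memory pool only as tensors are actually allocated
# config.gpu_options.per_process_gpu_memory_fraction = 0.8  # or cap the fraction of the card to use
sess = tf.Session(config=config)
sess.run(init)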

That concludes this walkthrough of implementing a convolutional neural network in TensorFlow in an alternative programming style. For more material on TensorFlow convolutional neural networks, see the other related articles on 腳本之家!
