Best Practices for Building Neural Networks with TensorFlow
I. A Complete TensorFlow Example
On the MNIST dataset we build a simple network: a two-layer neural network whose non-linearity comes from ReLU units. During training we use an exponentially decaying learning rate, regularization to avoid overfitting, and a moving-average model to make the final model more robust. The decay and averaging rules are summarized right below.
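For reference, TensorFlow's exponential decay computes learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / decay_steps), and tf.train.ExponentialMovingAverage keeps a shadow copy of each variable updated as shadow = decay * shadow + (1 - decay) * variable; when a step counter is supplied, the effective decay is min(MOVING_AVERAGE_DECAY, (1 + num_updates) / (10 + num_updates)), so early in training the averages warm up faster.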
程序?qū)⒂?jì)算神經(jīng)網(wǎng)絡(luò)前向傳播的部分單獨(dú)定義一個(gè)函數(shù)inference,訓(xùn)練部分定義一個(gè)train函數(shù),再定義一個(gè)主函數(shù)main。
完整程序:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu May 25 08:56:30 2017
@author: marsjhao
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784  # number of input nodes
OUTPUT_NODE = 10  # number of output nodes
LAYER1_NODE = 500  # number of hidden-layer nodes
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8  # base learning rate
LEARNING_RATE_DECAY = 0.99  # decay rate of the learning rate
REGULARIZATION_RATE = 0.0001  # weight of the regularization term
TRAINING_STEPS = 10000  # number of training iterations
MOVING_AVERAGE_DECAY = 0.99  # decay coefficient of the moving average
# Compute the network's forward propagation from the given weights and biases
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Check whether an ExponentialMovingAverage object was passed in
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1))
                            + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) \
               + avg_class.average(biases2)
# Training process of the neural-network model
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Parameters of the network
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE],
                                               stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE],
                                               stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))
    # Forward propagation without the moving-average model
    y = inference(x, None, weights1, biases1, weights2, biases2)
    global_step = tf.Variable(0, trainable=False)  # current training-step counter
    # ExponentialMovingAverage object, driven by the current step counter
    variable_averages = tf.train.ExponentialMovingAverage(
        MOVING_AVERAGE_DECAY, global_step)
    # Op that updates the moving average of every trainable variable
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Forward propagation under the moving-average model
    average_y = inference(x, variable_averages, weights1, biases1,
                          weights2, biases2)
    # Cross-entropy loss
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    # L2 regularizer, applied to weights1 and weights2
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularization  # total loss
    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY)
    # Gradient-descent op; the global_step argument increments the counter by 1
    train_step = tf.train.GradientDescentOptimizer(learning_rate) \
        .minimize(loss, global_step=global_step)
    # Combine the two ops
    train_op = tf.group(train_step, variables_averages_op)
    '''
    # Equivalent to tf.group():
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')
    '''
    # Accuracy: the final prediction uses the forward-propagation result
    # computed with the moving-average parameters
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # Initialize the session and run the training loop
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Data to feed for validation
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # Data to feed for testing
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print('After %d training steps, validation accuracy'
                      ' using average model is %f' % (i, validate_acc))
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})
        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print('After %d training steps, test accuracy'
              ' using average model is %f' % (TRAINING_STEPS, test_acc))
# Entry point
def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

# Run only when this file is executed directly, not when imported as a module
if __name__ == '__main__':
    tf.app.run()  # calls main() with the parsed argument list
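As written, the script prints the moving-average model's validation accuracy every 1,000 steps and a single test-set accuracy after all 10,000 steps; both figures come from average_y, i.e. from the averaged parameters.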
II. Analysis and an Improved Design
1. Analysis of the program
First, the forward-propagation function inference requires every variable to be passed in as an argument. As the network structure grows more complex and acquires more parameters, this makes the program very hard to read. A minimal sketch of the scoped alternative follows; Section III builds the full program around the same pattern.
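In this sketch the helper name dense_layer is hypothetical, but the pattern is the one the improved design uses: each layer creates and owns its variables via tf.variable_scope and tf.get_variable, so callers no longer thread every weight and bias through the call chain.
import tensorflow as tf

# A minimal sketch, not the final code: each layer owns its variables,
# so inference() would need no weight/bias arguments at all.
def dense_layer(input_tensor, out_dim, scope_name):
    in_dim = input_tensor.get_shape()[1].value
    with tf.variable_scope(scope_name):
        weights = tf.get_variable(
            'weights', [in_dim, out_dim],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable(
            'biases', [out_dim], initializer=tf.constant_initializer(0.0))
    return tf.matmul(input_tensor, weights) + biases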
Second, once the program exits, the trained model can no longer be used. Large networks take a long time to train, so intermediate results should be saved at regular intervals during training; then, if the process crashes mid-run, the most recently saved parameters survive and no time or compute is wasted. A minimal checkpointing sketch follows.
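In the sketch below, the /tmp/demo.ckpt path and the 1,000-step interval are illustrative assumptions, not part of the original program:
import tensorflow as tf

step_var = tf.Variable(0, name='step', trainable=False)
increment = tf.assign_add(step_var, 1)
saver = tf.train.Saver()  # keeps the 5 most recent checkpoints by default

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(3000):
        sess.run(increment)
        if i % 1000 == 0:
            # global_step appends the step count to the file name,
            # producing e.g. /tmp/demo.ckpt-2001
            saver.save(sess, '/tmp/demo.ckpt', global_step=step_var)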
Third, training and testing should be split into two independent programs, with the forward-propagation code both of them need factored out into a shared library module. This guarantees that training and prediction invoke exactly the same forward-propagation computation.
2. Design of the improved program
mnist_inference.py
This file defines the network's forward-propagation pass; the repeatedly used weight-creation code is factored out into its own function.
Variables are obtained through tf.get_variable: they are created during training, and at test time their values are loaded from the saved model. Because loading can rename moving-average values onto the original variables, the very same name can refer to the variable itself during training and to its moving average during evaluation.
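A small sketch of the renaming mechanism (the variable name v is made up for illustration): variables_to_restore() builds a dictionary from the shadow-variable names recorded in a checkpoint to the plain variables of the current graph, so restoring through it loads moving-average values into the variables themselves.
import tensorflow as tf

v = tf.Variable(0.0, name='v')
ema = tf.train.ExponentialMovingAverage(0.99)

# Maps the shadow-variable names stored in a checkpoint onto the
# plain variables of the current graph:
print(ema.variables_to_restore())
# {'v/ExponentialMovingAverage': <tf.Variable 'v:0' shape=() dtype=float32_ref>}

# A Saver built from this map therefore loads the moving-average value
# of v (as saved by the training program) into v itself.
saver = tf.train.Saver(ema.variables_to_restore())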
mnist_train.py
This program implements the complete training process of the network.
mnist_eval.py
Evaluation runs against the moving-average model.
tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH) retrieves the file name of the newest model; in effect it reads the full contents of the checkpoint file in that directory.
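For illustration, assuming mnist_train.py has already written at least one checkpoint into Model_Folder/:
import tensorflow as tf

ckpt = tf.train.get_checkpoint_state('Model_Folder/')
if ckpt and ckpt.model_checkpoint_path:
    print(ckpt.model_checkpoint_path)       # newest checkpoint, e.g. Model_Folder/model.ckpt-9001
    print(ckpt.all_model_checkpoint_paths)  # every checkpoint still on disk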
III. TensorFlow Best-Practice Example
mnist_inference.py
import tensorflow as tf

INPUT_NODE = 784
OUTPUT_NODE = 10
LAYER1_NODE = 500

def get_weight_variable(shape, regularizer):
    weights = tf.get_variable(
        "weights", shape,
        initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        # Add the weight-regularization term to the 'losses' collection
        tf.add_to_collection('losses', regularizer(weights))
    return weights

def inference(input_tensor, regularizer):
    with tf.variable_scope('layer1'):
        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable("biases", [LAYER1_NODE],
                                 initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
    with tf.variable_scope('layer2'):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable("biases", [OUTPUT_NODE],
                                 initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases
    return layer2
mnist_train.py
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_inference

BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 10000
MOVING_AVERAGE_DECAY = 0.99
MODEL_SAVE_PATH = "Model_Folder/"
MODEL_NAME = "model.ckpt"

def train(mnist):
    # Input placeholders
    x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE],
                       name='x-input')
    y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE],
                        name='y-input')
    # Regularizer and forward-propagation output
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    y = mnist_inference.inference(x, regularizer)
    # Current step counter and moving-average model
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY,
                                                          global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Loss function
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY)
    # Training op: gradient descent plus the moving-average update
    train_step = tf.train.GradientDescentOptimizer(learning_rate) \
        .minimize(loss, global_step=global_step)
    train_op = tf.group(train_step, variables_averages_op)
    # Saver object for persisting the model (TensorFlow persistence class)
    saver = tf.train.Saver()
    # Start the session and the training loop
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(TRAINING_STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g."
                      % (step, loss_value))
                # The global_step argument appends the current step count
                # to each saved model's file name
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME),
                           global_step=global_step)

def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    tf.app.run()
mnist_eval.py
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_inference
import mnist_train

EVAL_INTERVAL_SECS = 10

def evaluate(mnist):
    with tf.Graph().as_default() as g:
        # Input placeholders
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE],
                           name='x-input')
        y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE],
                            name='y-input')
        # Feed dict for the validation set
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # No regularization loss at test time
        y = mnist_inference.inference(x, None)
        # Accuracy
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        # Load the moving-average values of the parameters
        variable_averages = tf.train.ExponentialMovingAverage(
            mnist_train.MOVING_AVERAGE_DECAY)
        saver = tf.train.Saver(variable_averages.variables_to_restore())
        # Start a session every EVAL_INTERVAL_SECS seconds
        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # Recover the step count global_step from the
                    # checkpoint file name
                    global_step = ckpt.model_checkpoint_path \
                        .split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict=validate_feed)
                    print("After %s training step(s), validation accuracy = %g"
                          % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(EVAL_INTERVAL_SECS)

def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    evaluate(mnist)

if __name__ == '__main__':
    tf.app.run()
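In practice the two scripts run side by side: start mnist_train.py in one process and mnist_eval.py in another. The evaluation script wakes every EVAL_INTERVAL_SECS seconds, restores whatever checkpoint is newest in MODEL_SAVE_PATH, and prints the validation accuracy of the moving-average model, so accuracy can be monitored while training is still in progress.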