A Simple Recurrent Neural Network Example in Python
This article presents an example of a recurrent neural network implemented in Python: a small network trained to add pairs of 8-bit binary numbers, digit by digit. It is shared for your reference; the details are as follows:
# Recurrent Neural Networks
import copy, numpy as np
np.random.seed(0)
# compute sigmoid nonlinearity
def sigmoid(x):
    output = 1/(1+np.exp(-x))
    return output
# convert output of sigmoid function to its derivative
def sigmoid_output_to_derivative(output):
    return output*(1-output)
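# quick sanity check (added for illustration, not part of the original post):
# sigmoid(0) is exactly 0.5, so the slope there should be 0.5*(1-0.5) = 0.25
assert sigmoid_output_to_derivative(sigmoid(0)) == 0.25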
# training dataset generation
int2binary = {}
binary_dim = 8
largest_number = pow(2,binary_dim)
binary = np.unpackbits(
    np.array([range(largest_number)],dtype=np.uint8).T,axis=1)
for i in range(largest_number):
    int2binary[i] = binary[i]
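# for illustration (not in the original post): int2binary now maps each
# integer 0..255 to its 8-bit encoding, most significant bit first, e.g.
#   int2binary[9] -> array([0, 0, 0, 0, 1, 0, 0, 1], dtype=uint8)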
# input variables
alpha = 0.1
input_dim = 2
hidden_dim = 16
output_dim = 1
# initialize neural network weights
synapse_0 = 2*np.random.random((input_dim,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,output_dim)) - 1
synapse_h = 2*np.random.random((hidden_dim,hidden_dim)) - 1
synapse_0_update = np.zeros_like(synapse_0)
synapse_1_update = np.zeros_like(synapse_1)
synapse_h_update = np.zeros_like(synapse_h)
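# shapes, for reference: synapse_0 is (2,16) input->hidden, synapse_1 is
# (16,1) hidden->output, synapse_h is (16,16) hidden->hidden; the *_update
# buffers accumulate gradients across all timesteps before each weight update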
# training logic
for j in range(10000):
    # generate a simple addition problem (a + b = c)
    a_int = np.random.randint(largest_number//2) # int version
    a = int2binary[a_int] # binary encoding
    b_int = np.random.randint(largest_number//2) # int version
    b = int2binary[b_int] # binary encoding
    # true answer
    c_int = a_int + b_int
    c = int2binary[c_int]
    # where we'll store our best guess (binary encoded)
    d = np.zeros_like(c)
    overallError = 0
    layer_2_deltas = list()
    layer_1_values = list()
    layer_1_values.append(np.zeros(hidden_dim))
    # moving along the positions in the binary encoding
    for position in range(binary_dim):
        # generate input and output
        X = np.array([[a[binary_dim - position - 1],b[binary_dim - position - 1]]])
        y = np.array([[c[binary_dim - position - 1]]]).T
        # hidden layer (input ~+ prev_hidden)
        layer_1 = sigmoid(np.dot(X,synapse_0) + np.dot(layer_1_values[-1],synapse_h))
        # output layer (new binary representation)
        layer_2 = sigmoid(np.dot(layer_1,synapse_1))
        # did we miss?... if so, by how much?
        layer_2_error = y - layer_2
        layer_2_deltas.append((layer_2_error)*sigmoid_output_to_derivative(layer_2))
        overallError += np.abs(layer_2_error[0])
        # decode estimate so we can print it out
        d[binary_dim - position - 1] = np.round(layer_2[0][0])
        # store hidden layer so we can use it in the next timestep
        layer_1_values.append(copy.deepcopy(layer_1))
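    # backpropagation through time: walk back from the most significant bit,
    # feeding each timestep's hidden-layer delta into the step before it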
    future_layer_1_delta = np.zeros(hidden_dim)
    for position in range(binary_dim):
        X = np.array([[a[position],b[position]]])
        layer_1 = layer_1_values[-position-1]
        prev_layer_1 = layer_1_values[-position-2]
        # error at output layer
        layer_2_delta = layer_2_deltas[-position-1]
        # error at hidden layer
        layer_1_delta = (future_layer_1_delta.dot(synapse_h.T) + layer_2_delta.dot(synapse_1.T)) * sigmoid_output_to_derivative(layer_1)
        # let's update all our weights so we can try again
        synapse_1_update += np.atleast_2d(layer_1).T.dot(layer_2_delta)
        synapse_h_update += np.atleast_2d(prev_layer_1).T.dot(layer_1_delta)
        synapse_0_update += X.T.dot(layer_1_delta)
        future_layer_1_delta = layer_1_delta
    synapse_0 += synapse_0_update * alpha
    synapse_1 += synapse_1_update * alpha
    synapse_h += synapse_h_update * alpha
    synapse_0_update *= 0
    synapse_1_update *= 0
    synapse_h_update *= 0
    # print out progress
    if j % 1000 == 0:
        print("Error:" + str(overallError))
        print("Pred:" + str(d))
        print("True:" + str(c))
        out = 0
        for index,x in enumerate(reversed(d)):
            out += x*pow(2,index)
        print(str(a_int) + " + " + str(b_int) + " = " + str(out))
        print("------------")
Running the script produces output like the following:
Error:[ 3.45638663]
Pred:[0 0 0 0 0 0 0 1]
True:[0 1 0 0 0 1 0 1]
9 + 60 = 1
------------
Error:[ 3.63389116]
Pred:[1 1 1 1 1 1 1 1]
True:[0 0 1 1 1 1 1 1]
28 + 35 = 255
------------
Error:[ 3.91366595]
Pred:[0 1 0 0 1 0 0 0]
True:[1 0 1 0 0 0 0 0]
116 + 44 = 72
------------
Error:[ 3.72191702]
Pred:[1 1 0 1 1 1 1 1]
True:[0 1 0 0 1 1 0 1]
4 + 73 = 223
------------
Error:[ 3.5852713]
Pred:[0 0 0 0 1 0 0 0]
True:[0 1 0 1 0 0 1 0]
71 + 11 = 8
------------
Error:[ 2.53352328]
Pred:[1 0 1 0 0 0 1 0]
True:[1 1 0 0 0 0 1 0]
81 + 113 = 162
------------
Error:[ 0.57691441]
Pred:[0 1 0 1 0 0 0 1]
True:[0 1 0 1 0 0 0 1]
81 + 0 = 81
------------
Error:[ 1.42589952]
Pred:[1 0 0 0 0 0 0 1]
True:[1 0 0 0 0 0 0 1]
4 + 125 = 129
------------
Error:[ 0.47477457]
Pred:[0 0 1 1 1 0 0 0]
True:[0 0 1 1 1 0 0 0]
39 + 17 = 56
------------
Error:[ 0.21595037]
Pred:[0 0 0 0 1 1 1 0]
True:[0 0 0 0 1 1 1 0]
11 + 3 = 14
------------
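As a check on the final block: the predicted bits [0 0 0 0 1 1 1 0] decode to 8 + 4 + 2 = 14, matching 11 + 3. The overall error falls from roughly 3.5 in the early iterations to about 0.2 by iteration 9000, as the hidden state learns to carry between bit positions.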
Original English article: https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/
I hope this article is helpful to readers working on Python programming.