Deep Learning: A TensorFlow 1.14 Implementation Example of TextLSTM
Predicting the last letter of a word
You can look up how an LSTM works elsewhere; here we only give simple example code for a single task: predicting the last letter of a word from its first three letters.
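As a concrete example of the data layout: the word 'make' is split into the input 'mak' (encoded as character indices) and the label 'e' (encoded one-hot). A standalone sketch of this encoding, mirroring the word2idx / np.eye logic in the full code below:

import numpy as np

chars = list('abcdefghijklmnopqrstuvwxyz')
word2idx = {c: i for i, c in enumerate(chars)}

word = 'make'
inputs = [word2idx[c] for c in word[:-1]]        # [12, 0, 10] for 'm', 'a', 'k'
target = np.eye(len(chars))[word2idx[word[-1]]]  # one-hot row for 'e'
print(inputs, target.argmax())                   # [12, 0, 10] 4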
# Look up LSTM theory elsewhere; this is just simple example code
import tensorflow as tf
import numpy as np

tf.reset_default_graph()

# Predict the last letter of each word
words = ['make','need','coal','word','love','hate','live','home','hash','star']

# Character vocabulary
chars = [c for c in 'abcdefghijklmnopqrstuvwxyz']
# Build char <-> index lookup tables
word2idx = {v: k for k, v in enumerate(chars)}
idx2word = {k: v for k, v in enumerate(chars)}

V = len(chars)   # vocabulary size
step = 3         # number of time steps
hidden = 50      # hidden-layer size
dim = 32         # character-embedding dimension

def make_batch(words):
    input_batch, target_batch = [], []
    for word in words:
        inputs = [word2idx[c] for c in word[:-1]]  # all characters except the last are the input
        target = word2idx[word[-1]]                # the last character is the label
        input_batch.append(inputs)
        # Convert the label to one-hot; softmax_cross_entropy_with_logits_v2 uses it below
        target_batch.append(np.eye(V)[target])
    return input_batch, target_batch

# Initialize the character embeddings
embedding = tf.get_variable("embedding", shape=[V, dim], initializer=tf.random_normal_initializer())

X = tf.placeholder(tf.int32, [None, step])
# Map the input indices to embedding vectors
XX = tf.nn.embedding_lookup(embedding, X)
# One-hot labels; float32 so the dtype matches the logits in the cross-entropy op
Y = tf.placeholder(tf.float32, [None, V])

# Define the LSTM cell
cell = tf.nn.rnn_cell.BasicLSTMCell(hidden)
# Run the RNN over the sequence
# outputs: [batch_size, step, hidden]; states: (c=[batch_size, hidden], h=[batch_size, hidden])
outputs, states = tf.nn.dynamic_rnn(cell, XX, dtype=tf.float32)

# Weights and bias connecting the hidden layer to the classifier
W = tf.Variable(tf.random_normal([hidden, V]))
b = tf.Variable(tf.random_normal([V]))

# Only the final cell state c (states[0]) is used here
# (you could also use the output feature vectors at every time step)
feature = tf.matmul(states[0], W) + b  # [batch_size, n_class]

# Compute the loss and optimize
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=feature, labels=Y))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)

# Prediction
prediction = tf.argmax(feature, 1)

# Initialize TF
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Generate inputs and labels
input_batch, target_batch = make_batch(words)

# Train the model
for epoch in range(1000):
    _, loss = sess.run([optimizer, cost], feed_dict={X: input_batch, Y: target_batch})
    if (epoch + 1) % 100 == 0:
        print('epoch: ', '%04d' % (epoch + 1), 'cost=', '%.6f' % loss)

# Predicted results
predict = sess.run([prediction], feed_dict={X: input_batch})
print([words[i][:-1] + ' ' + idx2word[c] for i, c in enumerate(predict[0])])
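As the comment in the code notes, the classifier does not have to read the final cell state states[0]; it can instead take the RNN output at the last time step. A one-line variant, assuming the rest of the graph stays unchanged:

# Use the last time step's output (the final hidden state h) instead of c:
feature = tf.matmul(outputs[:, -1], W) + b  # [batch_size, n_class]

For BasicLSTMCell with fixed-length inputs, outputs[:, -1] equals states[1] (the final h), so either form works; feeding c, as this example does, also trains fine here but is the less common choice.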
Printed output:
epoch: 0100 cost= 0.003784
epoch: 0200 cost= 0.001891
epoch: 0300 cost= 0.001122
epoch: 0400 cost= 0.000739
epoch: 0500 cost= 0.000522
epoch: 0600 cost= 0.000388
epoch: 0700 cost= 0.000300
epoch: 0800 cost= 0.000238
epoch: 0900 cost= 0.000193
epoch: 1000 cost= 0.000160
['mak e', 'nee d', 'coa l', 'wor d', 'lov e', 'hat e', 'liv e', 'hom e', 'has h', 'sta r']
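The trained session can also be queried for a single prefix. A minimal inference sketch (the prefix 'liv' is taken from the training list; with only ten training words, don't expect generalization to unseen prefixes):

test_batch = [[word2idx[c] for c in 'liv']]
pred = sess.run(prediction, feed_dict={X: test_batch})
print('liv -> ' + idx2word[pred[0]])  # expected: e, completing 'live'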
That concludes this TensorFlow 1.14 implementation example of TextLSTM. For more material on TextLSTM and TensorFlow, see the other related articles on 腳本之家.