The difference between Keras Sequential and Functional models, explained
Differences between the Sequential model and the Functional model
The Sequential model
A Sequential model has exactly one input and one output, and the network is a linear stack of layers.

You can construct a Sequential model either by passing a list of layers to its constructor or by adding layers one at a time with .add(). The example below uses .add(); a brief list-based sketch follows it:
from keras.models import Sequential
from keras.layers import Dense, Activation
# The first layer of a Sequential model must be given the shape of its input data; every subsequent layer can then infer the shape of its intermediate tensors automatically
model = Sequential()
model.add(Dense(32, input_shape=(100,)))
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
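For comparison, here is a minimal sketch of the list-based construction mentioned above; it defines the same kind of small binary classifier for 100-feature inputs (the layer sizes simply mirror the example above):
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(100,)),
    Activation('relu'),
    Dense(1),
    Activation('sigmoid'),
])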
# For a single-input model with 10 classes (categorical classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Generate dummy data
import numpy as np
import keras  # needed below for keras.utils.to_categorical
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
# Convert labels to categorical one-hot encoding
one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)
The Functional model
Differences from the Sequential model:
1. A layer instance accepts a tensor as its argument and returns a tensor.
2. A graph of layers whose inputs are tensors and whose outputs are tensors is itself a model, defined through the Model class.
from keras.models import Sequential, Model
from keras import layers
from keras import Input
"""
# The Sequential implementation
seq_model = Sequential()
seq_model.add(layers.Dense(32, activation='relu', input_shape=(64,)))
seq_model.add(layers.Dense(32, activation='relu'))
seq_model.add(layers.Dense(10, activation='softmax'))
"""
# The equivalent Functional implementation
input_tensor = Input(shape=(64,))
x = layers.Dense(32, activation='relu')(input_tensor)
x = layers.Dense(32, activation='relu')(x)
output_tensor = layers.Dense(10, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels)  # starts training; here data must have shape (samples, 64) and labels must be 10-class one-hot vectors
model.summary()  # inspect the model's architecture
3. All models are callable, just like layers
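Because a model is callable, you can reuse it on a new tensor exactly as you would a layer. A minimal sketch, assuming the functional model defined above is still in scope:
from keras import Input

new_input = Input(shape=(64,))
# Calling the model reuses both its architecture and its weights
new_output = model(new_input)  # yields a 10-way softmax tensor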
This makes it possible to quickly build models that process sequences of inputs. For example, you can turn an image-classification model into a video-classification model with a single line of code:
from keras.layers import TimeDistributed
# Input tensor for sequences of 20 timesteps,
# each containing a 64-dimensional vector (matching the input of the model defined above)
input_sequences = Input(shape=(20, 64))
# This applies the previous model to every timestep in the input sequences.
# The output of the previous model was a 10-way softmax,
# so the output of the layer below will be a sequence of 20 vectors of size 10.
processed_sequences = TimeDistributed(model)(input_sequences)
4. Building models with multiple inputs or multiple outputs

from keras.models import Model
from keras import layers
from keras import Input
text_vocabulary_size = 10000
question_vocabulary_size = 10000
answer_vocabulary_size = 500
# The text input is a variable-length sequence of integers. Note that you can optionally give inputs a name
text_input = Input(shape=(None,), dtype='int32', name='text')
embedded_text = layers.Embedding(
text_vocabulary_size, 64)(text_input)
encoded_text = layers.LSTM(32)(embedded_text)
question_input = Input(shape=(None,), dtype='int32', name='question')
embedded_question = layers.Embedding(
question_vocabulary_size, 32)(question_input)
encoded_question = layers.LSTM(16)(embedded_question)
# Concatenate the encoded question and the encoded text
concatenated = layers.concatenate([encoded_text, encoded_question],
axis=-1)
answer = layers.Dense(answer_vocabulary_size,
activation='softmax')(concatenated)
# When instantiating the model, specify its two inputs and its output
model = Model([text_input, question_input], answer)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['acc'])
import numpy as np
import keras  # needed below for keras.utils.to_categorical
num_samples = 1000
max_length = 100
text = np.random.randint(1, text_vocabulary_size,
size=(num_samples, max_length))
question = np.random.randint(1, question_vocabulary_size,
size=(num_samples, max_length))
answers = np.random.randint(answer_vocabulary_size, size=(num_samples))
answers = keras.utils.to_categorical(answers, answer_vocabulary_size)
model.fit([text, question], answers, epochs=10, batch_size=128)  # fitting with a list of inputs
# Fitting with a dictionary of inputs (only possible because the inputs were named above)
model.fit({'text': text, 'question': question}, answers,
          epochs=10, batch_size=128)
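The same named-input format also works at inference time. A minimal sketch, assuming the model has been fitted as above:
predictions = model.predict({'text': text, 'question': question})
# predictions has shape (num_samples, answer_vocabulary_size)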

Similarly, a model can have several outputs. The example below predicts a person's age, income group, and gender from a sequence of post tokens:
from keras import layers
from keras import Input
from keras.models import Model
vocabulary_size = 50000
num_income_groups = 10
posts_input = Input(shape=(None,), dtype='int32', name='posts')
embedded_posts = layers.Embedding(vocabulary_size, 256)(posts_input)
x = layers.Conv1D(128, 5, activation='relu')(embedded_posts)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu')(x)
x = layers.Conv1D(256, 5, activation='relu')(x)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu')(x)
x = layers.Conv1D(256, 5, activation='relu')(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(128, activation='relu')(x)
# Note that each output layer is given a name
age_prediction = layers.Dense(1, name='age')(x)
income_prediction = layers.Dense(num_income_groups,
activation='softmax',
name='income')(x)
gender_prediction = layers.Dense(1, activation='sigmoid', name='gender')(x)
model = Model(posts_input,
[age_prediction, income_prediction, gender_prediction])
# Specify a different loss function for each output
model.compile(optimizer='rmsprop',
loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'])
# Equivalent to the above (only possible because the output layers are named)
model.compile(optimizer='rmsprop',
loss={'age': 'mse',
'income': 'categorical_crossentropy',
'gender': 'binary_crossentropy'})
# Assume that posts, age_targets, income_targets and gender_targets are all NumPy arrays
model.fit(posts, [age_targets, income_targets, gender_targets],
epochs=10, batch_size=64)
# Equivalent
model.fit(posts, {'age': age_targets,
'income': income_targets,
'gender': gender_targets},
epochs=10, batch_size=64)
Because the mean squared error (MSE) loss of the age-regression task is typically around 3 to 5, while the cross-entropy loss of the gender-classification task can be as low as 0.1, we need to balance the contribution of each loss. For example, we can give the gender cross-entropy loss a weight of 10 and the age MSE loss a weight of 0.25:
model.compile(optimizer='rmsprop',
              loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'],
              loss_weights=[0.25, 1., 10.])
# Equivalent
model.compile(optimizer='rmsprop',
loss={'age': 'mse',
'income': 'categorical_crossentropy',
'gender': 'binary_crossentropy'},
loss_weights={'age': 0.25,
'income': 1.,
'gender': 10.})
Summary
The above is based on my personal experience; I hope it can serve as a useful reference for everyone, and I also hope everyone will continue to support 腳本之家.