Solving input-shape problems with Keras's Conv1D layer
This post fixes the following two errors:
1. ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4
2. ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …
1. ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4
The offending code:
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=x_train.shape))  # wrong: input_shape must not include the sample axis
or
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1:])))
This happens because the model input has the wrong number of dimensions: with the TensorFlow backend, the input_shape passed to Conv1D must be two-dimensional, (steps, channels), and must not include the sample axis. Two changes fix it:
1. Reshape x_train and x_test to add a trailing channels axis
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))  # (samples, steps, 1)
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
2. Change input_shape to match
model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1], 1)))  # (steps, channels), sample axis omitted
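Putting the two steps together, here is a minimal self-contained sketch of the fix; the sample and feature counts are made up for illustration:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D

# hypothetical data: 100 samples with 20 features each, as produced by a scaler
x_train = np.random.rand(100, 20)
# step 1: add the trailing channels axis -> (samples, steps, channels)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))

model = Sequential()
# step 2: input_shape excludes the sample axis -> (steps, channels)
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1], 1)))
model.summary()  # the Conv1D output shape is (None, 20, 8)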
The original answer by the expert:
The input shape is wrong, it should be input_shape = (1, 3253) for Theano or (3253, 1) for TensorFlow. The input shape doesn't include the number of samples.
Then you need to reshape your data to include the channels axis:
x_train = x_train.reshape((500000, 1, 3253))
Or move the channels dimension to the end if you use TensorFlow. After these changes it should work.
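In other words, the channels axis comes first under Theano and last under TensorFlow. A small sketch of the two layouts (using 500 stand-in samples instead of the 500000 in the quoted answer, to keep it light):

import numpy as np

x = np.zeros((500, 3253), dtype=np.float32)  # stand-in data
x_theano = x.reshape((500, 1, 3253))         # Theano / 'channels_first': (samples, channels, steps)
x_tensorflow = x.reshape((500, 3253, 1))     # TensorFlow / 'channels_last': (samples, steps, channels)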
2. ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …
出現(xiàn)此問(wèn)題是因?yàn)閥label的維數(shù)與x_train x_test不符,既然將x_train x_test都reshape了,那么也需要對(duì)y進(jìn)行reshape。
The fix:
Reshape the y labels to match x_train:
t_train = t_train.values.reshape((t_train.shape[0], 1))  # .values: t_train is a pandas Series here
t_test = t_test.values.reshape((t_test.shape[0], 1))
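One pitfall worth noting: when train_test_split is given a pandas Series, it returns Series, and newer pandas versions drop Series.reshape, so convert with .values (or np.asarray) first. A self-contained sketch:

import numpy as np
import pandas as pd

t_train = pd.Series(np.random.rand(100))  # stand-in target, as returned by train_test_split
t_train = t_train.values.reshape((t_train.shape[0], 1))  # (100,) -> (100, 1)
print(t_train.shape)  # one column, matching the model's Dense(1) output per sample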
Appendix
The full corrected code:
import warnings
warnings.filterwarnings("ignore")
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import pandas as pd
import numpy as np
import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation, Flatten, Conv1D
from keras.callbacks import LearningRateScheduler, EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
from keras.regularizers import l2
from keras.models import load_model

df_train = pd.read_csv('./input/train_V2.csv')
df_test = pd.read_csv('./input/test_V2.csv')
df_train.drop(df_train.index[[2744604]], inplace=True)  # drop the row containing a NaN

df_train["distance"] = df_train["rideDistance"] + df_train["walkDistance"] + df_train["swimDistance"]
# df_train["healthpack"] = df_train["boosts"] + df_train["heals"]
df_train["skill"] = df_train["headshotKills"] + df_train["roadKills"]
df_test["distance"] = df_test["rideDistance"] + df_test["walkDistance"] + df_test["swimDistance"]
# df_test["healthpack"] = df_test["boosts"] + df_test["heals"]
df_test["skill"] = df_test["headshotKills"] + df_test["roadKills"]

df_train_size = df_train.groupby(['matchId', 'groupId']).size().reset_index(name='group_size')
df_test_size = df_test.groupby(['matchId', 'groupId']).size().reset_index(name='group_size')
df_train_mean = df_train.groupby(['matchId', 'groupId']).mean().reset_index()
df_test_mean = df_test.groupby(['matchId', 'groupId']).mean().reset_index()

df_train = pd.merge(df_train, df_train_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
del df_train_mean
del df_test_mean

df_train = pd.merge(df_train, df_train_size, how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_size, how='left', on=['matchId', 'groupId'])
del df_train_size
del df_test_size

target = 'winPlacePerc'
train_columns = list(df_test.columns)
# remove some columns
train_columns.remove("Id")
train_columns.remove("matchId")
train_columns.remove("groupId")
train_columns_new = []
for name in train_columns:
    if '_' in name:
        train_columns_new.append(name)
train_columns = train_columns_new
# print(train_columns)

X = df_train[train_columns]
Y = df_test[train_columns]
T = df_train[target]
del df_train

x_train, x_test, t_train, t_test = train_test_split(X, T, test_size=0.2, random_state=1234)

# scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(x_train)
scaler = preprocessing.QuantileTransformer().fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
Y = scaler.transform(Y)

# add the trailing channels axis expected by Conv1D
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
Y = Y.reshape((Y.shape[0], Y.shape[1], 1))  # the prediction features need the same channels axis
# reshape the targets to (samples, 1); .values because they are pandas Series
t_train = t_train.values.reshape((t_train.shape[0], 1))
t_test = t_test.values.reshape((t_test.shape[0], 1))

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1], 1)))
model.add(BatchNormalization())
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(16, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(16, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(64, kernel_size=3, strides=1, padding='same'))
model.add(Activation('tanh'))
model.add(Flatten())
model.add(Dropout(0.5))
# model.add(Dropout(0.25))
model.add(Dense(512, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(128, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizer = optimizers.Adam(lr=0.01, epsilon=1e-8, decay=1e-4)  # assign before compiling
model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
model.summary()

early_stopping = EarlyStopping(monitor='val_mean_absolute_error', mode='min', patience=4, verbose=1)
# model_checkpoint = ModelCheckpoint(filepath='best_model.h5', monitor='val_mean_absolute_error', mode='min', save_best_only=True, verbose=1)
# reduce_lr = ReduceLROnPlateau(monitor='val_mean_absolute_error', mode='min', factor=0.5, patience=3, min_lr=0.0001, verbose=1)

history = model.fit(x_train, t_train, validation_data=(x_test, t_test), epochs=30, batch_size=32768, callbacks=[early_stopping], verbose=1)
pred = model.predict(Y)
pred = pred.ravel()
Additional background: Keras Conv1D parameters and input/output shapes explained
Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True)
filters: the number of convolution kernels (i.e. the dimensionality of the output space)
kernel_size: an integer, or a list/tuple of a single integer, giving the length of the 1D convolution window
strides: an integer, or a list/tuple of a single integer, giving the stride of the convolution. Specifying any strides value != 1 is incompatible with specifying any dilation_rate value != 1.
padding: the zero-padding strategy, one of "valid", "same" or "causal". "causal" produces a causal (dilated) convolution, i.e. output[t] does not depend on input[t+1:], which is useful when modelling temporal signals where the temporal order must not be violated. "valid" performs only valid convolutions, i.e. border data is not processed. "same" keeps the convolution results at the borders, which usually makes the output shape equal to the input shape. (The three modes are demonstrated in the sketch after this list.)
activation: the activation function, given as the name of a predefined activation or an element-wise Theano/TensorFlow function. If unspecified, no activation is applied (i.e. the linear activation a(x) = x).
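To make the three padding modes concrete, here is a small sketch (the filter count and input length are arbitrary choices for illustration):

from keras.models import Sequential
from keras.layers import Conv1D

for pad in ('valid', 'same', 'causal'):
    model = Sequential()
    model.add(Conv1D(4, kernel_size=3, padding=pad, input_shape=(10, 1)))
    print(pad, model.output_shape)
# valid  -> (None, 8, 4)  : no padding, length shrinks by kernel_size - 1
# same   -> (None, 10, 4) : output length equals input length
# causal -> (None, 10, 4) : same length, but output[t] ignores input[t+1:]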
model.add(Conv1D(filters=nn_params["input_filters"], kernel_size=nn_params["filter_length"], strides=1, padding='valid', activation=nn_params["activation"], kernel_regularizer=l2(nn_params["reg"])))
Example: the input shape is (None, 1000, 4)
First dimension: None, the number of samples (batch size)
Second dimension (the output length, for padding='valid' and strides=1):
output_length = int(input_length - nn_params["filter_length"] + 1)
which in this case is:
output_length = (input_length + 2*padding - kernel_size + 1) / strides = (1000 + 2*0 - 32 + 1)/1 = 969
Third dimension: filters
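The arithmetic can be checked directly in Keras; a sketch assuming kernel_size=32 and an arbitrary 16 filters, since the actual nn_params values are not given above:

from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(filters=16, kernel_size=32, strides=1, padding='valid', input_shape=(1000, 4)))
print(model.output_shape)  # (None, 969, 16): 1000 - 32 + 1 = 969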
This concludes this article on solving Conv1D input problems in Keras. I hope it gives everyone a useful reference, and I hope you will continue to support 腳本之家.