Python: Implementing Deep Neural Network Regression with the Keras Interface
1 Preface
An earlier article, Python TensorFlow深度學(xué)習(xí)回歸代碼:DNNRegressor, gave a detailed introduction to deep learning networks based on the TensorFlow tf.estimator interface. In TensorFlow 2.0, the newer Keras interface offers the same functionality as tf.estimator, but it is easier to learn and far friendlier to beginners; the official TensorFlow documentation likewise recommends that newcomers start with Keras. This article therefore walks through deep learning regression with the TensorFlow Keras interface, with detailed explanations and hands-on code.
As in that earlier post, Section 2 of this article breaks the code down piece by piece, and Section 3 gives the complete code. Material already covered there is omitted here; if needed, read Python TensorFlow深度學(xué)習(xí)回歸代碼:DNNRegressor first.
Version information: Python 3.8.5; TensorFlow 2.4.1; IDE: Spyder 4.1.5.
2 Code Walkthrough
2.1 Preparation
First, import the required libraries and packages.
import os
import glob
import openpyxl
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import metrics
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers.experimental import preprocessing
Since the code that follows prints a great deal of numeric output, most of it with fractional parts, we can constrain how floating-point numbers, arrays, and NumPy objects are displayed so that the output looks tidier and more consistent.
np.set_printoptions(precision=4,suppress=True)
Here, precision sets the number of digits shown after the decimal point (the default is 8), and suppress controls whether fixed-point notation is used (as opposed to scientific notation).
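As a quick illustration (with made-up values), the same array prints very differently before and after the call:

import numpy as np

Arr=np.array([0.000012345,123456.789])
print(Arr)                                  # default: scientific notation, e.g. [1.2345e-05 1.2346e+05]
np.set_printoptions(precision=4,suppress=True)
print(Arr)                                  # fixed-point, 4 decimals: roughly [     0.      123456.789]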
2.2 Parameter Configuration
A hallmark of deep learning code is the large number of parameters that must be set by hand. To avoid scrolling back and forth while tuning, we can gather the main parameters in one place for easy adjustment later. The meaning of each parameter is explained in the relevant section below.
# Input parameters.
DataPath="G:/CropYield/03_DL/00_Data/AllDataAll.csv"
ModelPath="G:/CropYield/03_DL/02_DNNModle"
CheckPointPath="G:/CropYield/03_DL/02_DNNModle/Weights"
CheckPointName=CheckPointPath+"/Weights_{epoch:03d}_{val_loss:.4f}.hdf5"
ParameterPath="G:/CropYield/03_DL/03_OtherResult/ParameterResult.xlsx"
TrainFrac=0.8
RandomSeed=np.random.randint(low=21,high=22)
CheckPointMethod='val_loss'
HiddenLayer=[64,128,256,512,512,1024,1024]
RegularizationFactor=0.0001
ActivationMethod='relu'
DropoutValue=[0.5,0.5,0.5,0.3,0.3,0.3,0.2]
OutputLayerActMethod='linear'
LossMethod='mean_absolute_error'
LearnRate=0.005
LearnDecay=0.0005
FitEpoch=500
BatchSize=9999
ValFrac=0.2
BestEpochOptMethod='adam'
2.3 Data Import and Splitting
My data are stored in a .csv file, so they can be read directly with pd.read_csv. Each column of the data is one feature, and each row is one sample made up of all the features plus the dependent variable (Yield below).
# Fetch and divide data.
MyData=pd.read_csv(DataPath,names=['EVI0610','EVI0626','EVI0712','EVI0728','EVI0813','EVI0829',
                                   'EVI0914','EVI0930','EVI1016','Lrad06','Lrad07','Lrad08',
                                   'Lrad09','Lrad10','Prec06','Prec07','Prec08','Prec09',
                                   'Prec10','Pres06','Pres07','Pres08','Pres09','Pres10',
                                   'SIF161','SIF177','SIF193','SIF209','SIF225','SIF241',
                                   'SIF257','SIF273','SIF289','Shum06','Shum07','Shum08',
                                   'Shum09','Shum10','SoilType','Srad06','Srad07','Srad08',
                                   'Srad09','Srad10','Temp06','Temp07','Temp08','Temp09',
                                   'Temp10','Wind06','Wind07','Wind08','Wind09','Wind10',
                                   'Yield'],header=0)
Next, split the imported data into training and test sets.
TrainData=MyData.sample(frac=TrainFrac,random_state=RandomSeed)
TestData=MyData.drop(TrainData.index)
Here, TrainFrac is the fraction of the data used for training (including the validation data), and RandomSeed is the random seed used when the data are split randomly.
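For reference, scikit-learn's train_test_split does the same job (a sketch, not part of the original code; it assumes the MyData, TrainFrac, and RandomSeed defined above, and will not reproduce exactly the same rows as the sample-based split):

from sklearn.model_selection import train_test_split

# Equivalent random split into training and test DataFrames.
TrainData2,TestData2=train_test_split(MyData,train_size=TrainFrac,random_state=RandomSeed)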
2.4 Plotting Joint Distributions
Before starting deep learning, it is worth examining how each input feature relates to the dependent variable. Joint distribution plots are a good way to inspect the relationships among several variables at once, and we use seaborn for this. seaborn is a Python data visualization library built on matplotlib that makes it possible to produce attractive plots with fairly simple calls. The code is as follows:
# Draw the joint distribution image.
def JointDistribution(Factors):
    plt.figure(1)
    sns.pairplot(TrainData[Factors],kind='reg',diag_kind='kde')
    sns.set(font_scale=2.0)
    DataDistribution=TrainData.describe().transpose()

# Draw the joint distribution image.
JointFactor=['Lrad07','Prec06','SIF161','Shum06','Srad07','Srad08','Srad10','Temp06','Yield']
JointDistribution(JointFactor)
Here, JointFactor lists the feature names to plot. In JointDistribution, kind sets the plot type of the off-diagonal panels, one of 'reg', 'scatter', 'kde', or 'hist': 'reg' adds a fitted regression line to each scatter plot, 'scatter' is the same plot without that line, 'kde' draws contour-style density plots, and 'hist' produces a grid-like 2-D histogram. diag_kind sets the plot type on the diagonal, either 'hist' (a histogram) or 'kde' (a smoothed, curve-style histogram). font_scale controls the font size in the figure. The last statement in JointDistribution collects summary statistics for every feature in TrainData, including the maximum, minimum, mean, and quantiles.
An example of the resulting figure is shown below:
Note that drawing joint distribution plots is slow; avoid selecting too many variables, or the program will stall here for quite a while.
2.5 Separating the Dependent Variable and Standardizing the Data
Separating the dependent variable needs no further explanation. Next, it is important to understand that data standardization matters a great deal in machine learning and deep learning. To borrow an example from the official documentation: the features are multiplied by the model weights in a neural network, so the scale of the inputs (the relative magnitudes of the different features) affects the scale of the outputs and of the gradients; standardizing the data therefore makes the model more stable.
First, the difference between standardization and normalization: standardization rescales a column of the training set to have mean 0 and variance 1, while normalization rescales a column of the training set into the range between 0 and 1. In machine learning, standardization is used more often than normalization, and standardized data tend to make neural network training converge faster.
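A minimal NumPy sketch (toy values, not the author's data) contrasting the two scalings:

import numpy as np

Col=np.array([10.0,20.0,30.0,40.0])
Standardized=(Col-Col.mean())/Col.std()               # mean 0, variance 1
Normalized=(Col-Col.min())/(Col.max()-Col.min())      # range [0, 1]
print(Standardized)  # [-1.3416 -0.4472  0.4472  1.3416]
print(Normalized)    # [0.     0.3333 0.6667 1.    ]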
Finally, remember: compute the standardization statistics from the training data only, and do not let the test set leak in! The test set is meant to represent unseen data, so letting it influence the scaling statistics would leak information and undermine the point of standardization.
# Separate independent and dependent variables.
TrainX=TrainData.copy(deep=True)
TestX=TestData.copy(deep=True)
TrainY=TrainX.pop('Yield')
TestY=TestX.pop('Yield')

# Standardization data.
Normalizer=preprocessing.Normalization()
Normalizer.adapt(np.array(TrainX))
Here, preprocessing.Normalization() creates a preprocessing layer whose job is to standardize data. Calling its .adapt() method on the data to be standardized (the independent variables of the training set) lets the layer learn the statistics it needs, after which it standardizes whatever passes through it.
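A small sanity check (toy data, not the crop-yield data) showing that the adapted layer really does map its input to roughly zero mean and unit variance:

import numpy as np
from tensorflow.keras.layers.experimental import preprocessing

Toy=np.array([[1.0],[2.0],[3.0]])
Layer=preprocessing.Normalization()
Layer.adapt(Toy)     # the layer learns mean=2 and variance=2/3 from the data
print(Layer(Toy))    # roughly [[-1.2247],[0.],[1.2247]]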
2.6 Deleting Old Model Files
Each run of the program saves the current model to the specified path. To keep the next save from being contaminated by the results of the previous run, we can delete every file in the model folder first.
# Delete the model result from the last run.
def DeleteOldModel(ModelPath):
    AllFileName=os.listdir(ModelPath)
    for i in AllFileName:
        NewPath=os.path.join(ModelPath,i)
        if os.path.isdir(NewPath):
            DeleteOldModel(NewPath)
        else:
            os.remove(NewPath)

# Delete the model result from the last run.
DeleteOldModel(ModelPath)
This code is explained in detail in Python TensorFlow深度學(xué)習(xí)回歸代碼:DNNRegressor, so it is not repeated here.
2.7 Saving and Loading the Optimal Epoch
During training, the model runs for several hundred epochs (one epoch means every sample in the training set has passed through the model once). Since the accuracy differs from epoch to epoch, we naturally want to pick out the best epoch among those hundreds.
# Find and save optimal epoch.
def CheckPoint(Name):
    Checkpoint=ModelCheckpoint(Name,
                               monitor=CheckPointMethod,
                               verbose=1,
                               save_best_only=True,
                               mode='auto')
    CallBackList=[Checkpoint]
    return CallBackList

# Find and save optimal epochs.
CallBack=CheckPoint(CheckPointName)
Here, Name is the path and file-naming pattern for the saved epochs; monitor is the metric we use to judge whether an epoch is the best one — in this case, the error on the validation data; verbose controls how much is logged, and 1 is sufficient; save_best_only determines whether only the epochs judged best are saved; and mode states whether a larger or a smaller monitor value is better. Since our monitor is the validation error, smaller is obviously better, so either 'auto' or 'min' works here; with 'auto', the model decides from the chosen monitor whether larger or smaller is better.
The checkpoint callback is then passed to CallBack. Note that the "optimal epoch" here really means several epochs: whenever an epoch achieves the best result the model has seen so far, it is saved; when a later epoch does even better, it is saved as well, without overwriting the earlier file. For example, if there are three epochs with errors of 5, 7, and 4, then the first and the third epochs are saved.
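The selection logic can be mimicked in a few lines of plain Python (a toy simulation of save_best_only, not Keras code):

Errors=[5,7,4]   # hypothetical validation errors of three epochs
Best=float('inf')
for Epoch,Err in enumerate(Errors,start=1):
    if Err<Best: # save only when the current epoch beats every earlier one
        Best=Err
        print('Epoch {0} saved with error {1}.'.format(Epoch,Err))
# Prints epoch 1 and epoch 3, matching the example above.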
2.8 Building the Model
Model building under the Keras interface is quite transparent. After reading the earlier article Python TensorFlow深度學(xué)習(xí)回歸代碼:DNNRegressor, the comments alongside the code below should make it self-explanatory.
# Build DNN model.
def BuildModel(Norm):
    Model=keras.Sequential([Norm, # The standardization layer
                            layers.Dense(HiddenLayer[0], # Neuron count of hidden layer 1
                                         kernel_regularizer=regularizers.l2(RegularizationFactor), # Apply L2 regularization
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(), # LeakyReLU, an improved ReLU activation, to speed up convergence and reduce overfitting
                            layers.BatchNormalization(), # Batch Normalization, to speed up convergence and stabilize the network
                            layers.Dropout(DropoutValue[0]), # Dropout rate of hidden layer 1
                            layers.Dense(HiddenLayer[1],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[1]),
                            layers.Dense(HiddenLayer[2],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[2]),
                            layers.Dense(HiddenLayer[3],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[3]),
                            layers.Dense(HiddenLayer[4],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[4]),
                            layers.Dense(HiddenLayer[5],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[5]),
                            layers.Dense(HiddenLayer[6],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            # If batch normalization is set in the last hidden layer, the error image
                            # will show a trend of first stable and then decline; otherwise, it will
                            # decline and then stable.
                            # layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[6]),
                            layers.Dense(units=1,
                                         activation=OutputLayerActMethod)]) # The last layer is the output layer
    Model.compile(loss=LossMethod, # How the training error of each batch is reduced
                  optimizer=tf.keras.optimizers.Adam(learning_rate=LearnRate,decay=LearnDecay)) # Adam optimizer with learning-rate decay
    return Model

# Build DNN regression model.
DNNModel=BuildModel(Normalizer)
DNNModel.summary()
DNNHistory=DNNModel.fit(TrainX,
                        TrainY,
                        epochs=FitEpoch,
                        # batch_size=BatchSize,
                        verbose=1,
                        callbacks=CallBack,
                        validation_split=ValFrac)
Here, .summary() prints the model summary, and validation_split carves out the fraction of the training data given by ValFrac to use as validation data. DNNHistory records how the various metrics change over the course of training; we will use it below to plot the error curves of the training process.
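For reference, DNNHistory.history is a plain dictionary, so the recorded metrics can also be inspected directly (a sketch assuming the DNNHistory object returned by fit above):

print(DNNHistory.history.keys())          # e.g. dict_keys(['loss', 'val_loss'])
print(DNNHistory.history['val_loss'][-1]) # validation error of the final epoch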
2.9 Plotting the Training Curves
In machine learning, overfitting is a major threat to accuracy, so it is best to plot the training and validation errors as training proceeds to get a clearer picture of how the model is doing.
# Draw error image.
def LossPlot(History):
    plt.figure(2)
    plt.plot(History.history['loss'],label='loss')
    plt.plot(History.history['val_loss'],label='val_loss')
    plt.ylim([0,4000])
    plt.xlabel('Epoch')
    plt.ylabel('Error')
    plt.legend()
    plt.grid(True)

# Draw error image.
LossPlot(DNNHistory)
Here, 'loss' and 'val_loss' are the errors on the training and validation sets during training; if the training error is clearly lower than the validation error, the model is overfitting.
2.10 Selecting the Optimal Epoch
As mentioned above, several qualifying epochs have been saved to the specified path. We can now pick the best of them as the final model parameters and use the result to predict the test data. In other words, we select the globally optimal epoch and load it into the final model.
# Optimize the model based on optimal epoch.
def BestEpochIntoModel(Path,Model):
    EpochFile=glob.glob(Path+'/*')
    BestEpoch=max(EpochFile,key=os.path.getmtime)
    Model.load_weights(BestEpoch)
    Model.compile(loss=LossMethod,
                  optimizer=BestEpochOptMethod)
    return Model

# Optimize the model based on optimal epoch.
DNNModel=BestEpochIntoModel(CheckPointPath,DNNModel)
In short, this uses the os.path.getmtime function to pick the most recently written file in the epoch folder. Because a checkpoint is only written when the validation error improves, the newest file is the globally optimal epoch, the one with the smallest validation error; load_weights then loads that epoch's parameters into the model.
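If the modification-time assumption ever breaks (for example, after copying the weights folder), the val_loss embedded in each file name can be parsed instead. A hypothetical alternative sketch, not the author's code:

import glob
import os

def BestEpochByName(Path):
    # File names follow CheckPointName, e.g. Weights_012_345.6789.hdf5;
    # strip the '.hdf5' suffix and take the number after the last underscore.
    EpochFile=glob.glob(Path+'/*.hdf5')
    return min(EpochFile,key=lambda f: float(os.path.basename(f).rsplit('_',1)[-1][:-5]))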
2.11 Model Testing, Fit Plotting, Accuracy Verification, and Saving the Model and Parameters
The earlier article Python TensorFlow深度學(xué)習(xí)回歸代碼:DNNRegressor explains this code in detail, so it is not repeated here.
# Draw Test image.
def TestPlot(TestY,TestPrediction):
    plt.figure(3)
    ax=plt.axes(aspect='equal')
    plt.scatter(TestY,TestPrediction)
    plt.xlabel('True Values')
    plt.ylabel('Predictions')
    Lims=[0,10000]
    plt.xlim(Lims)
    plt.ylim(Lims)
    plt.plot(Lims,Lims)
    plt.grid(False)

# Verify the accuracy and draw error hist image.
def AccuracyVerification(TestY,TestPrediction):
    DNNError=TestPrediction-TestY
    plt.figure(4)
    plt.hist(DNNError,bins=30)
    plt.xlabel('Prediction Error')
    plt.ylabel('Count')
    plt.grid(False)
    Pearsonr=stats.pearsonr(TestY,TestPrediction)
    R2=metrics.r2_score(TestY,TestPrediction)
    RMSE=metrics.mean_squared_error(TestY,TestPrediction)**0.5
    print('Pearson correlation coefficient is {0}, and RMSE is {1}.'.format(Pearsonr[0],RMSE))
    return (Pearsonr[0],R2,RMSE)

# Save key parameters.
def WriteAccuracy(*WriteVar):
    ExcelData=openpyxl.load_workbook(WriteVar[0])
    WriteSheet=ExcelData.active
    MaxRowNum=WriteSheet.max_row
    # Append each value to the next free row of the worksheet.
    for i in range(len(WriteVar)-1):
        WriteSheet.cell(MaxRowNum+1,i+1).value=WriteVar[i+1]
    ExcelData.save(WriteVar[0])

# Predict test set data.
TestPrediction=DNNModel.predict(TestX).flatten()
# Draw Test image.
TestPlot(TestY,TestPrediction)
# Verify the accuracy and draw error hist image.
AccuracyResult=AccuracyVerification(TestY,TestPrediction)
PearsonR,R2,RMSE=AccuracyResult[0],AccuracyResult[1],AccuracyResult[2]
# Save model and key parameters.
DNNModel.save(ModelPath)
WriteAccuracy(ParameterPath,PearsonR,R2,RMSE,TrainFrac,RandomSeed,CheckPointMethod,
              ','.join('%s' %i for i in HiddenLayer),RegularizationFactor,
              ActivationMethod,','.join('%s' %i for i in DropoutValue),OutputLayerActMethod,
              LossMethod,LearnRate,LearnDecay,FitEpoch,BatchSize,ValFrac,BestEpochOptMethod)
The resulting fit plot:
The resulting error histogram:
That concludes the code walkthrough.
3 Complete Code
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 24 12:42:17 2021

@author: fkxxgis
"""

import os
import glob
import openpyxl
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import metrics
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers.experimental import preprocessing

np.set_printoptions(precision=4,suppress=True)

# Draw the joint distribution image.
def JointDistribution(Factors):
    plt.figure(1)
    sns.pairplot(TrainData[Factors],kind='reg',diag_kind='kde')
    sns.set(font_scale=2.0)
    DataDistribution=TrainData.describe().transpose()

# Delete the model result from the last run.
def DeleteOldModel(ModelPath):
    AllFileName=os.listdir(ModelPath)
    for i in AllFileName:
        NewPath=os.path.join(ModelPath,i)
        if os.path.isdir(NewPath):
            DeleteOldModel(NewPath)
        else:
            os.remove(NewPath)

# Find and save optimal epoch.
def CheckPoint(Name):
    Checkpoint=ModelCheckpoint(Name,
                               monitor=CheckPointMethod,
                               verbose=1,
                               save_best_only=True,
                               mode='auto')
    CallBackList=[Checkpoint]
    return CallBackList

# Build DNN model.
def BuildModel(Norm):
    Model=keras.Sequential([Norm, # The standardization layer
                            layers.Dense(HiddenLayer[0], # Neuron count of hidden layer 1
                                         kernel_regularizer=regularizers.l2(RegularizationFactor), # Apply L2 regularization
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(), # LeakyReLU, an improved ReLU activation, to speed up convergence and reduce overfitting
                            layers.BatchNormalization(), # Batch Normalization, to speed up convergence and stabilize the network
                            layers.Dropout(DropoutValue[0]), # Dropout rate of hidden layer 1
                            layers.Dense(HiddenLayer[1],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[1]),
                            layers.Dense(HiddenLayer[2],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[2]),
                            layers.Dense(HiddenLayer[3],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[3]),
                            layers.Dense(HiddenLayer[4],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[4]),
                            layers.Dense(HiddenLayer[5],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[5]),
                            layers.Dense(HiddenLayer[6],
                                         kernel_regularizer=regularizers.l2(RegularizationFactor),
                                         # activation=ActivationMethod
                                         ),
                            layers.LeakyReLU(),
                            # If batch normalization is set in the last hidden layer, the error image
                            # will show a trend of first stable and then decline; otherwise, it will
                            # decline and then stable.
                            # layers.BatchNormalization(),
                            layers.Dropout(DropoutValue[6]),
                            layers.Dense(units=1,
                                         activation=OutputLayerActMethod)]) # The last layer is the output layer
    Model.compile(loss=LossMethod, # How the training error of each batch is reduced
                  optimizer=tf.keras.optimizers.Adam(learning_rate=LearnRate,decay=LearnDecay)) # Adam optimizer with learning-rate decay
    return Model

# Draw error image.
def LossPlot(History):
    plt.figure(2)
    plt.plot(History.history['loss'],label='loss')
    plt.plot(History.history['val_loss'],label='val_loss')
    plt.ylim([0,4000])
    plt.xlabel('Epoch')
    plt.ylabel('Error')
    plt.legend()
    plt.grid(True)

# Optimize the model based on optimal epoch.
def BestEpochIntoModel(Path,Model):
    EpochFile=glob.glob(Path+'/*')
    BestEpoch=max(EpochFile,key=os.path.getmtime)
    Model.load_weights(BestEpoch)
    Model.compile(loss=LossMethod,
                  optimizer=BestEpochOptMethod)
    return Model

# Draw Test image.
def TestPlot(TestY,TestPrediction):
    plt.figure(3)
    ax=plt.axes(aspect='equal')
    plt.scatter(TestY,TestPrediction)
    plt.xlabel('True Values')
    plt.ylabel('Predictions')
    Lims=[0,10000]
    plt.xlim(Lims)
    plt.ylim(Lims)
    plt.plot(Lims,Lims)
    plt.grid(False)

# Verify the accuracy and draw error hist image.
def AccuracyVerification(TestY,TestPrediction):
    DNNError=TestPrediction-TestY
    plt.figure(4)
    plt.hist(DNNError,bins=30)
    plt.xlabel('Prediction Error')
    plt.ylabel('Count')
    plt.grid(False)
    Pearsonr=stats.pearsonr(TestY,TestPrediction)
    R2=metrics.r2_score(TestY,TestPrediction)
    RMSE=metrics.mean_squared_error(TestY,TestPrediction)**0.5
    print('Pearson correlation coefficient is {0}, and RMSE is {1}.'.format(Pearsonr[0],RMSE))
    return (Pearsonr[0],R2,RMSE)

# Save key parameters.
def WriteAccuracy(*WriteVar):
    ExcelData=openpyxl.load_workbook(WriteVar[0])
    WriteSheet=ExcelData.active
    MaxRowNum=WriteSheet.max_row
    # Append each value to the next free row of the worksheet.
    for i in range(len(WriteVar)-1):
        WriteSheet.cell(MaxRowNum+1,i+1).value=WriteVar[i+1]
    ExcelData.save(WriteVar[0])

# Input parameters.
DataPath="G:/CropYield/03_DL/00_Data/AllDataAll.csv"
ModelPath="G:/CropYield/03_DL/02_DNNModle"
CheckPointPath="G:/CropYield/03_DL/02_DNNModle/Weights"
CheckPointName=CheckPointPath+"/Weights_{epoch:03d}_{val_loss:.4f}.hdf5"
ParameterPath="G:/CropYield/03_DL/03_OtherResult/ParameterResult.xlsx"
TrainFrac=0.8
RandomSeed=np.random.randint(low=21,high=22)
CheckPointMethod='val_loss'
HiddenLayer=[64,128,256,512,512,1024,1024]
RegularizationFactor=0.0001
ActivationMethod='relu'
DropoutValue=[0.5,0.5,0.5,0.3,0.3,0.3,0.2]
OutputLayerActMethod='linear'
LossMethod='mean_absolute_error'
LearnRate=0.005
LearnDecay=0.0005
FitEpoch=500
BatchSize=9999
ValFrac=0.2
BestEpochOptMethod='adam'

# Fetch and divide data.
MyData=pd.read_csv(DataPath,names=['EVI0610','EVI0626','EVI0712','EVI0728','EVI0813','EVI0829',
                                   'EVI0914','EVI0930','EVI1016','Lrad06','Lrad07','Lrad08',
                                   'Lrad09','Lrad10','Prec06','Prec07','Prec08','Prec09',
                                   'Prec10','Pres06','Pres07','Pres08','Pres09','Pres10',
                                   'SIF161','SIF177','SIF193','SIF209','SIF225','SIF241',
                                   'SIF257','SIF273','SIF289','Shum06','Shum07','Shum08',
                                   'Shum09','Shum10','SoilType','Srad06','Srad07','Srad08',
                                   'Srad09','Srad10','Temp06','Temp07','Temp08','Temp09',
                                   'Temp10','Wind06','Wind07','Wind08','Wind09','Wind10',
                                   'Yield'],header=0)
TrainData=MyData.sample(frac=TrainFrac,random_state=RandomSeed)
TestData=MyData.drop(TrainData.index)

# Draw the joint distribution image.
# JointFactor=['Lrad07','Prec06','SIF161','Shum06','Srad07','Srad08','Srad10','Temp06','Yield']
# JointDistribution(JointFactor)

# Separate independent and dependent variables.
TrainX=TrainData.copy(deep=True)
TestX=TestData.copy(deep=True)
TrainY=TrainX.pop('Yield')
TestY=TestX.pop('Yield')

# Standardization data.
Normalizer=preprocessing.Normalization()
Normalizer.adapt(np.array(TrainX))

# Delete the model result from the last run.
DeleteOldModel(ModelPath)

# Find and save optimal epochs.
CallBack=CheckPoint(CheckPointName)

# Build DNN regression model.
DNNModel=BuildModel(Normalizer)
DNNModel.summary()
DNNHistory=DNNModel.fit(TrainX,
                        TrainY,
                        epochs=FitEpoch,
                        # batch_size=BatchSize,
                        verbose=1,
                        callbacks=CallBack,
                        validation_split=ValFrac)

# Draw error image.
LossPlot(DNNHistory)

# Optimize the model based on optimal epoch.
DNNModel=BestEpochIntoModel(CheckPointPath,DNNModel)

# Predict test set data.
TestPrediction=DNNModel.predict(TestX).flatten()

# Draw Test image.
TestPlot(TestY,TestPrediction)

# Verify the accuracy and draw error hist image.
AccuracyResult=AccuracyVerification(TestY,TestPrediction)
PearsonR,R2,RMSE=AccuracyResult[0],AccuracyResult[1],AccuracyResult[2]

# Save model and key parameters.
DNNModel.save(ModelPath)
WriteAccuracy(ParameterPath,PearsonR,R2,RMSE,TrainFrac,RandomSeed,CheckPointMethod,
              ','.join('%s' %i for i in HiddenLayer),RegularizationFactor,
              ActivationMethod,','.join('%s' %i for i in DropoutValue),OutputLayerActMethod,
              LossMethod,LearnRate,LearnDecay,FitEpoch,BatchSize,ValFrac,BestEpochOptMethod)
That is the full detail of implementing deep neural network regression in Python with the Keras interface. For more on deep neural network regression with Python and Keras, see the other related articles on 腳本之家!