Four ways to build a network model in PyTorch
There are many ways to build a network model with PyTorch; four of them are briefly listed below.
Suppose we want to build the following network:
Conv layer -> ReLU layer -> Pooling layer -> Fully connected layer -> ReLU layer -> Fully connected layer
First, import the packages used by all four methods:
```python
import torch
import torch.nn.functional as F
from collections import OrderedDict
```
Method 1
```python
# Method 1 -----------------------------------------
class Net1(torch.nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.conv1 = torch.nn.Conv2d(3, 32, 3, 1, 1)
        self.dense1 = torch.nn.Linear(32 * 3 * 3, 128)
        self.dense2 = torch.nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.dense1(x))
        x = self.dense2(x)
        return x

print("Method 1:")
model1 = Net1()
print(model1)
```
This is the most common approach; early tutorials typically used it.
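A quick way to see the input size implied by `Linear(32 * 3 * 3, 128)`: the conv keeps the spatial size (padding=1) and `max_pool2d(..., 2)` halves it, so the images must be 6x6. A minimal sanity check (the dummy input shape here is my own inference, not stated in the article):

```python
import torch
import torch.nn.functional as F

class Net1(torch.nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.conv1 = torch.nn.Conv2d(3, 32, 3, 1, 1)
        self.dense1 = torch.nn.Linear(32 * 3 * 3, 128)
        self.dense2 = torch.nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.dense1(x))
        x = self.dense2(x)
        return x

# 6x6 input: conv (padding=1) keeps 6x6, pooling halves it to 3x3,
# so the flattened size matches the expected 32 * 3 * 3
x = torch.randn(4, 3, 6, 6)
out = Net1()(x)
print(out.shape)  # torch.Size([4, 10])
```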
Method 2
```python
# Method 2 ------------------------------------------
class Net2(torch.nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 3, 1, 1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2)
        )
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(32 * 3 * 3, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 10)
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 2:")
model2 = Net2()
print(model2)
```
This method uses the torch.nn.Sequential() container for quick assembly: the layers are added to the container in order. The drawback is that each layer is keyed by a default Arabic numeral, which makes the layers hard to tell apart.
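The default numeric naming can be seen directly on a small container (a standalone sketch using just the conv part from above):

```python
import torch

# Layers added to Sequential without names get numeric keys "0", "1", "2"
conv = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, 1, 1),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
)
print(conv)      # submodules are listed as (0), (1), (2)
print(conv[0])   # access is by integer index only
names = [name for name, _ in conv.named_children()]
print(names)     # ['0', '1', '2']
```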
Method 3
```python
# Method 3 -------------------------------
class Net3(torch.nn.Module):
    def __init__(self):
        super(Net3, self).__init__()
        self.conv = torch.nn.Sequential()
        self.conv.add_module("conv1", torch.nn.Conv2d(3, 32, 3, 1, 1))
        self.conv.add_module("relu1", torch.nn.ReLU())
        self.conv.add_module("pool1", torch.nn.MaxPool2d(2))
        self.dense = torch.nn.Sequential()
        self.dense.add_module("dense1", torch.nn.Linear(32 * 3 * 3, 128))
        self.dense.add_module("relu2", torch.nn.ReLU())
        self.dense.add_module("dense2", torch.nn.Linear(128, 10))

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 3:")
model3 = Net3()
print(model3)
```
This method improves on the second: each layer is added with add_module(), which also assigns it an individual name.
Method 4
```python
# Method 4 ------------------------------------------
class Net4(torch.nn.Module):
    def __init__(self):
        super(Net4, self).__init__()
        self.conv = torch.nn.Sequential(
            OrderedDict([
                ("conv1", torch.nn.Conv2d(3, 32, 3, 1, 1)),
                ("relu1", torch.nn.ReLU()),
                ("pool", torch.nn.MaxPool2d(2))
            ])
        )
        self.dense = torch.nn.Sequential(
            OrderedDict([
                ("dense1", torch.nn.Linear(32 * 3 * 3, 128)),
                ("relu2", torch.nn.ReLU()),
                ("dense2", torch.nn.Linear(128, 10))
            ])
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 4:")
model4 = Net4()
print(model4)
```
This is an alternative spelling of the third method: the layers are passed in as an ordered dictionary, each with its own name.
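One convenience of named layers (whether via add_module() or OrderedDict) is that they become accessible as attributes; a standalone sketch using just the dense part from above:

```python
import torch
from collections import OrderedDict

dense = torch.nn.Sequential(OrderedDict([
    ("dense1", torch.nn.Linear(32 * 3 * 3, 128)),
    ("relu2", torch.nn.ReLU()),
    ("dense2", torch.nn.Linear(128, 10)),
]))

# Named submodules can be reached by attribute instead of by index
print(dense.dense1)   # Linear(in_features=288, out_features=128, ...)
names = [name for name, _ in dense.named_children()]
print(names)          # ['dense1', 'relu2', 'dense2']
```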
Complete code:
```python
import torch
import torch.nn.functional as F
from collections import OrderedDict


# Method 1 -----------------------------------------
class Net1(torch.nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.conv1 = torch.nn.Conv2d(3, 32, 3, 1, 1)
        self.dense1 = torch.nn.Linear(32 * 3 * 3, 128)
        self.dense2 = torch.nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.dense1(x))
        x = self.dense2(x)
        return x

print("Method 1:")
model1 = Net1()
print(model1)


# Method 2 ------------------------------------------
class Net2(torch.nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 3, 1, 1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2)
        )
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(32 * 3 * 3, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 10)
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 2:")
model2 = Net2()
print(model2)


# Method 3 -------------------------------
class Net3(torch.nn.Module):
    def __init__(self):
        super(Net3, self).__init__()
        self.conv = torch.nn.Sequential()
        self.conv.add_module("conv1", torch.nn.Conv2d(3, 32, 3, 1, 1))
        self.conv.add_module("relu1", torch.nn.ReLU())
        self.conv.add_module("pool1", torch.nn.MaxPool2d(2))
        self.dense = torch.nn.Sequential()
        self.dense.add_module("dense1", torch.nn.Linear(32 * 3 * 3, 128))
        self.dense.add_module("relu2", torch.nn.ReLU())
        self.dense.add_module("dense2", torch.nn.Linear(128, 10))

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 3:")
model3 = Net3()
print(model3)


# Method 4 ------------------------------------------
class Net4(torch.nn.Module):
    def __init__(self):
        super(Net4, self).__init__()
        self.conv = torch.nn.Sequential(
            OrderedDict([
                ("conv1", torch.nn.Conv2d(3, 32, 3, 1, 1)),
                ("relu1", torch.nn.ReLU()),
                ("pool", torch.nn.MaxPool2d(2))
            ])
        )
        self.dense = torch.nn.Sequential(
            OrderedDict([
                ("dense1", torch.nn.Linear(32 * 3 * 3, 128)),
                ("relu2", torch.nn.ReLU()),
                ("dense2", torch.nn.Linear(128, 10))
            ])
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.dense(res)
        return out

print("Method 4:")
model4 = Net4()
print(model4)
```
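All four methods build the same architecture; the naming style only changes how submodules are keyed, not the computation. A quick check of that claim (my own sketch, comparing the dense stack built anonymously vs. with OrderedDict):

```python
import torch
from collections import OrderedDict

def count_params(m):
    # Total number of learnable scalars in a module
    return sum(p.numel() for p in m.parameters())

plain = torch.nn.Sequential(
    torch.nn.Linear(32 * 3 * 3, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
named = torch.nn.Sequential(OrderedDict([
    ("dense1", torch.nn.Linear(32 * 3 * 3, 128)),
    ("relu2", torch.nn.ReLU()),
    ("dense2", torch.nn.Linear(128, 10)),
]))

# Identical layer shapes -> identical parameter counts and output shapes
print(count_params(plain), count_params(named))  # 38282 38282
x = torch.randn(2, 32 * 3 * 3)
print(plain(x).shape, named(x).shape)  # torch.Size([2, 10]) twice
```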
That is all for this article; hopefully it is helpful for your learning, and thank you for supporting 腳本之家.