Usage of torch.flatten() and torch.nn.Flatten() in PyTorch
The flatten() function flattens a contiguous range of a tensor's dimensions into a single dimension.
torch.flatten(input, start_dim=0, end_dim=-1) → Tensor
input (Tensor) – the input tensor.
start_dim (int) – the first dim to flatten
end_dim (int) – the last dim to flatten
start_dim and end_dim together specify the range of dimensions you want to flatten.
Here is an example:
x = torch.tensor([[1, 2], [3, 4], [5, 6]])
x = x.flatten(0)
x
------------------------
tensor([1, 2, 3, 4, 5, 6])
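You can also flatten just a sub-range of dimensions. A minimal sketch (the tensor t and its shape below are chosen purely for illustration):

import torch

t = torch.arange(24).reshape(2, 3, 4)  # shape: (2, 3, 4)

# Flatten dims 0 and 1 only; dim 2 is left untouched.
torch.flatten(t, start_dim=0, end_dim=1).shape  # torch.Size([6, 4])

# Flatten dims 1 and 2 only; dim 0 is preserved.
torch.flatten(t, start_dim=1, end_dim=2).shape  # torch.Size([2, 12])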
For image data, we usually want the tensor entering the fc layers to have shape (batch_size, N), so we flatten from dim 1 onward:
x = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
x = x.flatten(1)
x
-------------------------
tensor([[1, 2, 3, 4],
        [5, 6, 7, 8]])
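To make the fc-layer case concrete, here is a minimal sketch (the batch size and image size are made up for illustration):

import torch

imgs = torch.randn(16, 3, 32, 32)  # a batch of 16 RGB 32x32 images
flat = imgs.flatten(1)             # keep the batch dim, flatten C*H*W
flat.shape                         # torch.Size([16, 3072])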
Note:
torch.nn.Flatten(start_dim=1, end_dim=-1)
Here start_dim defaults to 1, so nn.Flatten() preserves the batch dimension by default.
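A small sketch of what this default difference means in practice (the input shape is arbitrary):

import torch
import torch.nn as nn

x = torch.randn(8, 2, 5)

torch.flatten(x).shape  # start_dim defaults to 0 -> torch.Size([80])
nn.Flatten()(x).shape   # start_dim defaults to 1 -> torch.Size([8, 10])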
So when building a network, the following two definitions are equivalent:
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        # The arguments for commonly used modules:
        # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0)
        # torch.nn.MaxPool2d(kernel_size, stride=None, padding=0)

        # input image size: [3, 128, 128]
        self.cnn_layers = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4, stride=4, padding=0),
        )
        self.fc_layers = nn.Sequential(
            nn.Linear(256 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 11)
        )

    def forward(self, x):
        # input (x): [batch_size, 3, 128, 128]
        # output: [batch_size, 11]

        # Extract features by convolutional layers.
        x = self.cnn_layers(x)

        # The extracted feature map must be flattened before going to fully-connected layers.
        x = x.flatten(1)

        # The features are transformed by fully-connected layers to obtain the final logits.
        x = self.fc_layers(x)
        return x
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),

            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4, stride=4, padding=0),

            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 11)
        )

    def forward(self, x):
        x = self.layers(x)
        return x
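As a quick sanity check (assuming the imports and either Classifier definition above; the dummy batch here is hypothetical), both versions map the same input to logits of the same shape:

model = Classifier()
dummy = torch.randn(4, 3, 128, 128)  # a fake batch of 4 images
model(dummy).shape                   # torch.Size([4, 11]) for either version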
Summary
The above is based on my personal experience; I hope it can serve as a useful reference.