Implementing Fully Connected Layers in PyTorch
Fully Connected Neural Networks (FC)
The fully connected network is one of the most basic neural network architectures; the English term is "fully connected", so it is commonly abbreviated FC.
The rule behind FC is simple: every node in the network, except those in the input layer, is connected to all nodes in the previous layer.
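To make this connectivity concrete, here is a minimal sketch (my own illustration, not part of the original example): a single fully connected layer from 784 inputs to 200 outputs stores a 200 × 784 weight matrix, so every output node is connected to every input node.

import torch.nn as nn

fc = nn.Linear(784, 200)   # one fully connected layer: 784 inputs -> 200 outputs
print(fc.weight.shape)     # torch.Size([200, 784]): each output node sees all 784 inputs
print(fc.bias.shape)       # torch.Size([200]): one bias per output node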
Take the MNIST example from the previous post:
import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

# Manually defined weights and biases for the three layers: 784 -> 200 -> 200 -> 10
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

# Kaiming initialization for the weight matrices
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)

def forward(x):
    x = x @ w1.t() + b1
    x = F.relu(x)
    x = x @ w2.t() + b2
    x = F.relu(x)
    x = x @ w3.t() + b3
    x = F.relu(x)
    return x

optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)

        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{}({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
    ))
Here we defined every w and b ourselves and wrote our own forward function. If we use the built-in fully connected layer instead, the code becomes much more concise and readable.
First, we define a class for our network architecture:
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x
It inherits from nn.Module and defines the entire network structure itself.
The inplace flag makes the activation reuse its input's storage rather than allocating new memory.
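A minimal check of this behaviour (my own sketch, using nn.LeakyReLU as in the model above): with inplace=True the activation writes its result into the input tensor's storage and returns that same tensor.

import torch
from torch import nn

x = torch.randn(4)
act = nn.LeakyReLU(inplace=True)
y = act(x)
print(y is x)   # True: no new tensor was allocated, x itself now holds the activated values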
Beyond that, the module can be called directly; there is no need to define the parameters or write out the computations by hand, which is much more convenient.
We can also see that initialization is handled automatically, so we no longer need to write the manual initialization used earlier.
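nn.Linear initializes its own weight and bias when it is constructed, so no extra code is required. If you nevertheless want the same Kaiming-normal scheme used in the manual version, a hedged sketch is shown below (init_weights is my own helper, not part of the original code):

from torch import nn

def init_weights(m):
    # re-initialize only the Linear layers; other modules are left untouched
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

net = MLP()               # the class defined above
net.apply(init_weights)   # apply() visits every submodule recursively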
Distinguishing nn.ReLU from F.relu()
The former is a class-style (module) interface, while the latter is a functional interface.
The class version is capitalized and must be instantiated before it can be called, whereas the lowercase functional version can be used directly.
Most importantly, the functional interface offers more flexibility, which makes it better suited for custom operations.
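A small side-by-side sketch of the two interfaces (my own illustration); both compute the same result:

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(4, 10)

relu_layer = nn.ReLU()   # class interface: instantiate first, typically used inside nn.Sequential
out1 = relu_layer(x)

out2 = F.relu(x)         # functional interface: call directly, e.g. inside a custom forward()

print(torch.equal(out1, out2))   # True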
Complete code
import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

device = torch.device('cuda:0')   # requires a CUDA-capable GPU
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{}({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
    ))
Supplement: implementing a fully connected network with one hidden layer in PyTorch
torch.nn provides the definition of the model, the network layers, and the loss function.
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
Above, we updated the parameters manually with param -= learning_rate * param.grad.
Using torch.optim, the parameters can be updated automatically. The optim package provides a variety of optimization methods, including SGD with momentum, RMSProp, Adam, and more.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
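The loop above uses Adam; switching to the other methods mentioned earlier only changes the construction line, while the training loop stays the same. A brief sketch (the hyperparameter values here are illustrative):

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)   # SGD + momentum
# optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)           # RMSProp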
The above is based on my personal experience; I hope it gives everyone a useful reference, and I hope you will continue to support 腳本之家. If there are any errors or anything not fully considered, please do not hesitate to point it out.