
Implementing Fully Connected Layers in PyTorch

Updated: 2021-05-10 09:37:20   Author: 陳千鶴
This article introduces how to implement a simple fully connected layer in PyTorch. I hope it serves as a useful reference.

Fully Connected Neural Networks (FC)

The fully connected neural network is the most basic neural network structure. The English term is "fully connected", hence the common abbreviation FC.

The principle of FC is simple: every node in the network, except those in the input layer, is connected to all the nodes in the previous layer.
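Concretely, one fully connected layer is just a matrix multiplication plus a bias. Here is a minimal sketch (the shapes are chosen to match the MNIST example below):

import torch

x = torch.randn(1, 784)    # one flattened 28x28 input image
W = torch.randn(200, 784)  # 200 output nodes, each connected to all 784 inputs
b = torch.zeros(200)
y = x @ W.t() + b          # shape (1, 200): every output depends on every input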

Take the MNIST example from last time:

import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

# MNIST loaders; set download=True on the first run if 'mnistdata' is not present yet
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

# Manually defined weights and biases for three layers: 784 -> 200 -> 200 -> 10
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

# Kaiming (He) initialization keeps activations well scaled for ReLU networks
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)

def forward(x):
    x = x@w1.t() + b1
    x = F.relu(x)
    x = x@w2.t() + b2
    x = F.relu(x)
    x = x@w3.t() + b3
    x = F.relu(x)  # note: CrossEntropyLoss expects raw logits, so this final ReLU is usually omitted
    return x

optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)  # flatten each 28x28 image into a 784-vector
        logits = forward(data)
        loss = criteon(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]  # index of the largest logit is the predicted class
        correct += pred.eq(target.data).sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
        ))

Above, we defined every w and b ourselves and wrote our own forward function. If we use fully connected layers instead, the whole program becomes more concise and readable.

First, we define a class for our own network structure:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        # nn.Linear replaces the manual w/b tensors; nn.Sequential chains the layers in order
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

It inherits from nn.Module and defines the entire network structure itself.

Here, inplace=True makes the activation reuse the input tensor's storage directly instead of allocating new memory for the output.
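A quick sketch of what inplace=True does (a standalone tensor is used here just for illustration):

import torch
from torch import nn

x = torch.randn(4)
y = nn.ReLU(inplace=True)(x)
print(y is x)  # True: the result is written into x's own storage, no new tensor is allocated

Because in-place operations overwrite their input, they should be avoided when the original activation is still needed elsewhere.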

Beyond that, the model can be called directly for computation; there is no need to define the parameters by hand or write out the operations, which is much more convenient.
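For example, a minimal usage sketch, applying the network to a batch of inputs directly:

net = MLP()
out = net(torch.randn(4, 784))  # a batch of 4 flattened images
print(out.shape)                # torch.Size([4, 10])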

We can also see that initialization is done automatically, so we no longer need to write the manual initialization from before.
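For reference, nn.Linear initializes its own weights and biases (PyTorch uses a Kaiming-uniform scheme by default). If you still want the explicit kaiming_normal_ initialization from the manual version, a minimal sketch is:

net = MLP()
for m in net.model:
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)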

Distinguishing nn.ReLU from F.relu()

The former is a class-style interface; the latter is a functional interface.

The class-style names are capitalized and must be instantiated before use, while the functional versions are lowercase and can be called directly.

Most importantly, the functional interface offers more freedom and is better suited for custom operations.
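A small sketch showing the two call styles side by side:

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(2, 3)

relu = nn.ReLU()  # class-style: instantiate the module first, then call it
out1 = relu(x)

out2 = F.relu(x)  # functional-style: call it directly, no instantiation needed

print(torch.equal(out1, out2))  # True: both compute max(x, 0)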

Complete code

import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')  # fall back to CPU if no GPU
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        loss = criteon(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
        ))

Supplement: a fully connected neural network with one hidden layer in PyTorch

The torch.nn package handles the model definition, the network layers, and the loss function.

import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)
    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    # Zero the gradients before running the backward pass.
    model.zero_grad()
    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()
    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

Above, we updated the parameters manually with param -= learning_rate * param.grad.

We can use torch.optim to update the parameters automatically instead. The optim package provides a variety of optimization methods, including SGD with momentum, RMSprop, Adam, and more.

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()  # clear the gradients accumulated by the previous step
    loss.backward()        # compute gradients of the loss w.r.t. all model parameters
    optimizer.step()       # let the optimizer update the parameters

The above is my personal experience; I hope it can serve as a useful reference, and I hope you will continue to support 腳本之家. If anything is wrong or not fully considered, corrections are welcome.
