Implementing MNIST color handwritten digit recognition with PyTorch
Preface
Environment:
- Language environment: Python 3.6
- Editor: Jupyter Lab
- Deep learning framework: PyTorch 1.10
Objectives:
- Learn how to write a complete deep learning program
- Work through the convolution and pooling layer computations by hand
I. Preliminary Work
Environment: Python 3.6, a GTX 1080 Ti, PyTorch 1.10 (the lab server's environment)
1. Set up the GPU or CPU
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
2. Import the data
train_ds = torchvision.datasets.MNIST('data',
                                      train=True,
                                      transform=torchvision.transforms.ToTensor(),  # convert the data to Tensor
                                      download=True)

test_ds = torchvision.datasets.MNIST('data',
                                     train=False,
                                     transform=torchvision.transforms.ToTensor(),  # convert the data to Tensor
                                     download=True)
II. Data Preprocessing
1. Load the data
Set the batch size:
batch_size = 32
Set up the DataLoaders:
train_dl = torch.utils.data.DataLoader(train_ds,
                                       batch_size=batch_size,
                                       shuffle=True)

test_dl = torch.utils.data.DataLoader(test_ds,
                                      batch_size=batch_size)
2. Visualize the data
Display some of the images:
import numpy as np

# Take one batch of images from the training DataLoader
imgs, labels = next(iter(train_dl))

# Set the figure size: a canvas 20 inches wide and 5 inches high
plt.figure(figsize=(20, 5))
for i, img in enumerate(imgs[:20]):
    # Rearrange dimensions from (C, H, W) to (H, W, C) for plotting
    npimg = img.numpy().transpose((1, 2, 0))
    # Split the figure into 2 rows x 10 columns and draw the (i+1)-th subplot
    plt.subplot(2, 10, i + 1)
    plt.imshow(npimg, cmap=plt.cm.binary)
    plt.axis('off')
3. Check the data again
Print the shape of the data:
# Take one batch and inspect the data format.
# The shape is [batch_size, channel, height, width]:
# batch_size is set by us; channel, height and width are the
# number of channels, height and width of each image.
imgs, labels = next(iter(train_dl))
imgs.shape
III. Building the Network
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential, ReLU

num_classes = 10

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # Convolutional and fully connected layers
        self.layers = Sequential(
            # Block 1
            Conv2d(3, 64, kernel_size=3),
            MaxPool2d(2),
            ReLU(),
            # Block 2
            Conv2d(64, 64, kernel_size=3),
            MaxPool2d(2),
            ReLU(),
            # Block 3
            Conv2d(64, 128, kernel_size=3),
            MaxPool2d(2),
            ReLU(),
            # Classifier
            Flatten(),
            Linear(512, 256, bias=True),
            ReLU(),
            Linear(256, 64, bias=True),
            ReLU(),
            Linear(64, num_classes, bias=True)
        )

    def forward(self, x):
        x = self.layers(x)
        return x
Print the network structure:
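The article does not show the code for this step; a minimal sketch, assuming the Model class defined above and the device selected earlier, would be:

model = Model().to(device)   # instantiate the network and move it to GPU/CPU
print(model)                 # print the layer structure

The model variable created here is also the one the training code below expects.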
Building the network with VGG16: dimensions unmodified
from torch import nn

vgg16 = torchvision.models.vgg16(pretrained=True)  # pretrained weights

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # Wrap the pretrained VGG16 as the whole network
        self.layers = Sequential(
            vgg16
        )

    def forward(self, x):
        x = self.layers(x)
        return x
Building the network with VGG16: dimensions modified
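No code is given for this variant. One common reading of "dimensions modified" is to swap VGG16's final fully connected layer (torchvision's vgg16.classifier[6], a Linear(4096, 1000)) for one that outputs the 10 classes used here; the sketch below is an assumption along those lines, not the author's original code:

import torchvision
from torch import nn

vgg16 = torchvision.models.vgg16(pretrained=True)    # pretrained on ImageNet
vgg16.classifier[6] = nn.Linear(4096, num_classes)   # replace the 1000-class head with a 10-class one
model = vgg16.to(device)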
IV. Training the Model
1. Set the loss function and learning rate
loss_fn = nn.CrossEntropyLoss()  # create the loss function
learn_rate = 1e-2                # learning rate
opt = torch.optim.SGD(model.parameters(), lr=learn_rate)
2. Model training
Training function:
# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set, 60000 images in total
    num_batches = len(dataloader)    # number of batches, 1875 (60000/32)

    train_loss, train_acc = 0, 0     # initialize training loss and accuracy

    for X, y in dataloader:          # fetch the images and their labels
        X, y = X.to(device), y.to(device)

        # Compute the prediction error
        pred = model(X)              # network output
        loss = loss_fn(pred, y)      # loss between the network output and the ground-truth labels

        # Backpropagation
        optimizer.zero_grad()        # reset the gradients
        loss.backward()              # backpropagate
        optimizer.step()             # update the parameters

        # Record accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss
Test function:
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set, 10000 images in total
    num_batches = len(dataloader)    # number of batches, 313 (10000/32 = 312.5, rounded up)
    test_loss, test_acc = 0, 0

    # When not training, stop tracking gradients to save memory and computation
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # Compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss
Main training code:
epochs = 20
train_loss = []
train_acc = []
test_loss = []
test_acc = []

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss, epoch_test_acc * 100, epoch_test_loss))

print('Done')
V. Model Evaluation
1. Loss and Accuracy plots
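The plotting code is not included in the article; a minimal sketch using the four lists collected during training might look like this:

import matplotlib.pyplot as plt

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

# Accuracy curves
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Test Accuracy')

# Loss curves
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Test Loss')

plt.show()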
2. Summary
- 1. The difference from the previous article is the number of channels: grayscale images have 1 channel while color images have 3, so the convolutions here take 3 input channels.
- 2. A quick note on how each layer's output size is computed, using the network above as an example:
Convolution layer: 32 -> 30, because ((32 - 3) / 1) + 1 = 30
Pooling layer: 30 -> 15, because 30 / 2 = 15
For the detailed calculations, see the article referenced at the beginning of this post; they are not repeated here. A small shape-check sketch follows below.
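To sanity-check this arithmetic, one can push a dummy batch through the layers and print each intermediate shape; a small sketch assuming the 3x32x32 input size implied by the Model class above:

m = Model()                      # fresh instance of the network defined above
x = torch.randn(1, 3, 32, 32)    # dummy batch: one 3-channel 32x32 image
for layer in m.layers:
    x = layer(x)
    print(layer.__class__.__name__, tuple(x.shape))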
As the results show, this training run did not perform well, so we can switch to the classic VGG16 network instead, which raised the accuracy to 88%.
The corresponding code is shown above.
This concludes this article on implementing MNIST color handwritten digit recognition with PyTorch. For more related PyTorch MNIST content, please search 脚本之家's previous articles, and we hope you will continue to support 脚本之家!