Sample Code: Implementing Hollywood Celebrity Recognition with PyTorch

Updated: 2023-01-03 09:19:32   Author: 老師我作業(yè)忘帶了
This article presents a PyTorch implementation of Hollywood celebrity recognition. The example code is explained in detail and should serve as a useful reference for study or work; interested readers can follow along below.

I. Preparation

1. Setting Up the GPU

import torch
from torch import nn
import torchvision
from torchvision import transforms, datasets, models
import matplotlib.pyplot as plt
import os, PIL, pathlib

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
device(type='cuda')

2. Importing the Data

data_dir = './hlw/'
data_dir = pathlib.Path(data_dir)
 
data_paths = list(data_dir.glob('*'))
# Each subdirectory name is a class label. path.name is cross-platform;
# the original str(path).split('\\')[1] only works with Windows path separators.
classNames = [path.name for path in data_paths]
classNames
['Angelina Jolie', 
'Brad Pitt', 
'Denzel Washington', 
'Hugh Jackman',
'Jennifer Lawrence', 
'Johnny Depp', 
'Kate Winslet', 
'Leonardo DiCaprio', 
'Megan Fox', 
'Natalie Portman',
'Nicole Kidman', 
'Robert Downey Jr',
'Sandra Bullock', 
'Scarlett Johansson',
'Tom Cruise',
'Tom Hanks',
'Will Smith']
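Since ImageFolder infers labels from these subdirectory names, a quick sanity check of the per-class image counts is worthwhile; celebrity datasets are often imbalanced. A minimal sketch, assuming each class directory holds its image files directly:

from collections import Counter

# Count images per class subdirectory (assumes a flat class/image layout)
counts = Counter(p.parent.name for p in data_dir.glob('*/*'))
print(counts)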
train_transforms = transforms.Compose([
    transforms.Resize([224,224]), # resize input images to the 224x224 expected by VGG-16
    transforms.ToTensor(),        # convert a PIL Image or numpy.ndarray to a tensor
    transforms.Normalize(
        mean = [0.485, 0.456, 0.406],
        std = [0.229,0.224,0.225]) # the standard ImageNet channel statistics
])
 
total_data = datasets.ImageFolder(data_dir,transform=train_transforms)
total_data
Dataset ImageFolder
    Number of datapoints: 1800
    Root location: hlw
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=PIL.Image.BILINEAR)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )

3. Splitting the Dataset

train_size = int(0.8*len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data,[train_size,test_size])
train_dataset,test_dataset

(<torch.utils.data.dataset.Subset at 0x12f8aceda00>, <torch.utils.data.dataset.Subset at 0x12f8acedac0>)

train_size,test_size

(1440, 360)

batch_size = 32
train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=1)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=1)
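
One caveat before iterating these loaders: with num_workers > 0, running this as a plain script on Windows (which the backslash path handling above suggests) requires the consuming code to live under a main guard, roughly:

if __name__ == '__main__':
    imgs, labels = next(iter(train_dl))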

4. Data Visualization

imgs, labels = next(iter(train_dl))
imgs.shape

 torch.Size([32, 3, 224, 224])

import numpy as np
 
# Set the figure size: a canvas 20 inches wide and 5 inches tall
plt.figure(figsize=(20, 5)) 
for i, img in enumerate(imgs[:20]):
    # Convert from the (C, H, W) tensor layout to the (H, W, C) layout imshow expects
    npimg = img.numpy().transpose((1, 2, 0))
    # Undo the normalization so the images display with natural colors
    npimg = npimg * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    npimg = npimg.clip(0, 1)
    # Split the figure into 2 rows x 10 columns and draw subplot i+1
    plt.subplot(2, 10, i+1)
    plt.imshow(npimg)
    plt.axis('off')

for X,y in test_dl:
    print('Shape of X [N, C, H, W]:', X.shape)
    print('Shape of y:', y.shape)
    break

Shape of X [N, C, H, W]: torch.Size([32, 3, 224, 224])
Shape of y: torch.Size([32])

II. Building the Network: Transfer Learning with VGG-16

from torchvision.models import vgg16

# Load VGG-16 pretrained on ImageNet and freeze all of its weights
model = vgg16(pretrained=True).to(device)
for param in model.parameters():
    param.requires_grad = False
 
# Replace the final fully connected layer with one output per celebrity class
model.classifier._modules['6'] = nn.Linear(4096, len(classNames))
 
model.to(device)
model
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=17, bias=True)
  )
)
# Inspect which layers will be trained
params_to_update = model.parameters()
# params_to_update = []
for name, param in model.named_parameters():
    if param.requires_grad == True:
#         params_to_update.append(param)
        print("\t", name)

III. Training the Model

1. Optimizer Setup

# Optimizer setup: only the parameters with requires_grad=True get updated
optimizer = torch.optim.Adam(params_to_update, lr=1e-4)
# Multiply the learning rate by gamma=0.92 every 5 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.92)
loss_fn = nn.CrossEntropyLoss()
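
To sanity-check the decay schedule without touching the real optimizer, a minimal sketch using a throwaway parameter:

# StepLR multiplies the learning rate by gamma once every step_size epochs
probe_opt = torch.optim.Adam([torch.zeros(1, requires_grad=True)], lr=1e-4)
probe_sched = torch.optim.lr_scheduler.StepLR(probe_opt, step_size=5, gamma=0.92)
for epoch in range(20):
    probe_opt.step()
    probe_sched.step()
print(probe_sched.get_last_lr())  # [~7.16e-05], i.e. 1e-4 * 0.92**4 after 20 epochs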

2. Writing the Training Function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set: 1440 images
    num_batches = len(dataloader)   # number of batches: 45 (1440/32)
 
    train_loss, train_acc = 0, 0  # initialize training loss and accuracy
    
    for X, y in dataloader:  # fetch a batch of images and labels
        X, y = X.to(device), y.to(device)
        
        # Compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # loss between the network output and the true labels
        
        # Backpropagation
        optimizer.zero_grad()  # reset the gradients to zero
        loss.backward()        # backpropagate
        optimizer.step()       # update the parameters
        
        # Record accuracy and loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
            
    train_acc  /= size
    train_loss /= num_batches
 
    return train_acc, train_loss

3. Writing the Test Function

def test(dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # size of the test set: 360 images
    num_batches = len(dataloader)          # number of batches: 12 (360/32, rounded up)
    test_loss, test_acc = 0, 0
    
    # Disable gradient tracking during evaluation to save memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            
            # Compute the loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)
            
            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()
 
    test_acc  /= size
    test_loss /= num_batches
 
    return test_acc, test_loss

4. Full Training

Training the output layer

epochs     = 20
train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []
best_acc = 0
filename='checkpoint.pth'
 
for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    
    scheduler.step()  # learning-rate decay
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    # Save the best model so far
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        state = {
            'state_dict': model.state_dict(),  # keys are layer names, values the trained weights
            'best_acc': best_acc,
            'optimizer' : optimizer.state_dict(),
        }
        torch.save(state, filename)
        
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%,Test_loss:{:.3f}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
print('Done')
print('best_acc:', best_acc)

Epoch: 1, Train_acc:12.2%, Train_loss:2.701, Test_acc:13.9%,Test_loss:2.544

Epoch: 2, Train_acc:20.8%, Train_loss:2.386, Test_acc:20.6%,Test_loss:2.377

Epoch: 3, Train_acc:26.1%, Train_loss:2.228, Test_acc:22.5%,Test_loss:2.274

...

Epoch:19, Train_acc:51.6%, Train_loss:1.528, Test_acc:35.8%,Test_loss:1.864

Epoch:20, Train_acc:53.9%, Train_loss:1.499, Test_acc:35.3%,Test_loss:1.852

Done

best_acc: 0.37430555555555556

Continuing training on all layers

# Unfreeze every parameter and fine-tune the whole network
for param in model.parameters():
    param.requires_grad = True
 
# Now train all of the parameters, keeping the learning rate small for fine-tuning
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.92)
 
# Loss function (the training loop below uses loss_fn)
loss_fn = nn.CrossEntropyLoss()
# Load the weights saved in the previous stage
checkpoint = torch.load(filename)
best_acc = checkpoint['best_acc']
model.load_state_dict(checkpoint['state_dict'])
epochs     = 20
train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []
best_acc = 0
filename='best_vgg16.pth'
 
for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    
    scheduler.step()  # learning-rate decay
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    # Save the best model so far
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        state = {
            'state_dict': model.state_dict(),  # keys are layer names, values the trained weights
            'best_acc': best_acc,
            'optimizer' : optimizer.state_dict(),
        }
        torch.save(state, filename)
        
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%,Test_loss:{:.3f}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
print('Done')
print('best_acc:', best_acc)

Epoch: 1, Train_acc:41.0%, Train_loss:1.654, Test_acc:57.5%,Test_loss:1.301

Epoch: 2, Train_acc:72.3%, Train_loss:0.781, Test_acc:58.9%,Test_loss:1.139

Epoch: 3, Train_acc:87.0%, Train_loss:0.381, Test_acc:67.8%,Test_loss:1.079

...

Epoch:19, Train_acc:99.3%, Train_loss:0.033, Test_acc:74.2%,Test_loss:0.895

Epoch:20, Train_acc:99.9%, Train_loss:0.003, Test_acc:74.4%,Test_loss:1.001

Done

best_acc: 0.7666666666666667
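
Fine-tuning the whole network roughly doubles the best test accuracy (from about 37% to about 77%), though the near-perfect training accuracy against a plateauing test accuracy shows the model now overfits the 1440 training images.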

IV. Visualizing the Results

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")               # suppress warning messages
plt.rcParams['font.sans-serif']    = ['SimHei'] # render CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False      # render minus signs correctly
plt.rcParams['figure.dpi']         = 100        # figure resolution
 
epochs_range = range(epochs)
 
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
 
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
 
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
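
Finally, the saved checkpoint can be reused for single-image prediction. A minimal sketch, assuming the objects defined above are still in scope; the image path is hypothetical:

from PIL import Image

# Restore the best fine-tuned weights
checkpoint = torch.load('best_vgg16.pth')
model.load_state_dict(checkpoint['state_dict'])
model.eval()

img = Image.open('./hlw/Tom Hanks/example.jpg').convert('RGB')  # hypothetical image path
x = train_transforms(img).unsqueeze(0).to(device)  # apply the same preprocessing as training
with torch.no_grad():
    pred_idx = model(x).argmax(1).item()
print(total_data.classes[pred_idx])  # ImageFolder stores the index-to-class mapping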

This concludes the article on implementing Hollywood celebrity recognition with PyTorch. For more related content, please search 脚本之家's earlier articles, and we hope you will continue to support 脚本之家!
