
Convolutional Neural Networks (CNN): Eye Disease Recognition Based on SqueezeNet

Updated: 2023-08-23 15:09:50   Author: 心無旁騖~
SqueezeNet is a lightweight and efficient CNN model with roughly 50x fewer parameters than AlexNet, yet accuracy close to AlexNet's. This article walks through eye disease recognition with a SqueezeNet-based convolutional neural network (CNN); use it as a reference if you need it.

Preface

SqueezeNet is a lightweight and efficient CNN model: it has roughly 50x fewer parameters than AlexNet, but its accuracy is close to AlexNet's. As the name suggests, "squeeze" means to compress, so you can guess from the name alone that the algorithm reduces the parameter count by compressing the model. Any improvement to an algorithm either raises accuracy or shrinks the model relative to the original design; the main goal here is to cut the number of parameters while preserving accuracy.
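As a rough sanity check on that parameter claim, you can compare the two reference implementations shipped with torchvision (a minimal sketch of mine, assuming torchvision >= 0.13 for the weights=None argument; older versions use pretrained=False instead):

import torchvision.models as models

squeezenet = models.squeezenet1_0(weights=None)  # no pretrained weights needed
alexnet = models.alexnet(weights=None)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(alexnet) / count(squeezenet))  # roughly 50, on the 1000-class ImageNet heads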

My environment:

  • Base environment: python3.7
  • IDE: pycharm
  • Deep learning framework: pytorch
  • Dataset and code download: https://pan.baidu.com/s/1oIyIWuJRmMBds55vEpYg-w (extraction code: 3vxr)

This walkthrough uses the eye disease dataset iChallenge-PM.

1.1 Dataset Introduction

iChallenge-PM is a medical dataset on pathologic myopia (PM) released for the iChallenge competition jointly organized by Baidu Brain and the Zhongshan Ophthalmic Center of Sun Yat-sen University. It contains fundus retina images from 1200 subjects, with 400 images each in the training, validation, and test sets.

  • training.zip: the training images and their labels
  • validation.zip: the validation-set images
  • valid_gt.zip: the validation-set labels

The dataset was downloaded from the AI Studio platform.


1.2 Dataset File Structure

The dataset consists of three archives:

training.zip

├── PALM-Training400
│   ├── PALM-Training400.zip
│   │   ├── H0002.jpg
│   │   └── ...
│   ├── PALM-Training400-Annotation-D&F.zip
│   │   └── ...
│   └── PALM-Training400-Annotation-Lession.zip
│       └── ...

valid_gt.zip: the ground-truth labels; the PM_Label_and_Fovea_Location.xlsx file inside is the label file

├── PALM-Validation-GT
│   ├── Lession_Masks
│   │   └── ...
│   ├── Disc_Masks
│   │   └── ...
│   └── PM_Label_and_Fovea_Location.xlsx

validation.zip: the test images (this article uses the validation set as its test set)

├── PALM-Validation
│   ├── V0001.jpg
│   ├── V0002.jpg
│   └── ...

2. Hands-On Project


2.1 Splitting Out the Data Labels

The layout of this eye disease dataset is a bit unwieldy, so I preprocessed it myself: the training and validation sets are each written to a txt file, with one line per image containing the image path and its label.

import os
import pandas as pd

# Generate the label file for the training set
train_dataset = r"F:\SqueezeNet\data\PALM-Training400\PALM-Training400"
train_list = []
label_list = []
train_filenames = os.listdir(train_dataset)
for name in train_filenames:
    filepath = os.path.join(train_dataset, name)
    train_list.append(filepath)
    # File names starting with N or H are non-pathologic (label 0),
    # those starting with P are pathologic myopia (label 1)
    if name[0] == 'N' or name[0] == 'H':
        label_list.append(0)
    elif name[0] == 'P':
        label_list.append(1)
    else:
        raise ValueError('Unexpected file name in dataset: ' + name)
with open('F:/SqueezeNet/train.txt', 'w', encoding='UTF-8') as f:
    for train_img, label in zip(train_list, label_list):
        f.write(str(train_img) + ' ' + str(label) + '\n')

# Generate the label file for the validation set
valid_dataset = r"F:\SqueezeNet\data\PALM-Validation400"
valid_label = r"F:\SqueezeNet\data\PALM-Validation-GT\PM_Label_and_Fovea_Location.xlsx"
data = pd.read_excel(valid_label)
valid_data = data[['imgName', 'Label']].values.tolist()
with open('F:/SqueezeNet/valid.txt', 'w', encoding='UTF-8') as f:
    for valid_img in valid_data:
        f.write(valid_dataset + '/' + valid_img[0] + ' ' + str(valid_img[1]) + '\n')
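Each line of the generated train.txt / valid.txt holds an image path and its label separated by a single space, which is exactly the format the LoadData class in the next section parses. A quick way to verify (using the same path as above):

with open('F:/SqueezeNet/train.txt', encoding='UTF-8') as f:
    print(f.readline().strip())
# Expected output has the form "<image path> <label>", e.g.
# F:\SqueezeNet\data\PALM-Training400\PALM-Training400\H0002.jpg 0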

2.2 Data Preprocessing

The preprocessing used here mainly includes resizing, random flips, and normalization; images are also padded to a 224x224 square with black borders (padding_black) so that resizing does not distort their aspect ratio.

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import transforms

# Normalize each channel to [-1, 1] (mean 0.5, std 0.5)
transform_BZ = transforms.Normalize(
    mean=[0.5, 0.5, 0.5],
    std=[0.5, 0.5, 0.5]
)

class LoadData(Dataset):
    def __init__(self, txt_path, train_flag=True):
        self.imgs_info = self.get_images(txt_path)
        self.train_flag = train_flag
        self.train_tf = transforms.Compose([
            transforms.Resize(224),             # resize to 224x224 (input is already square after padding)
            transforms.RandomHorizontalFlip(),  # random left-right flip
            transforms.RandomVerticalFlip(),    # random up-down flip
            transforms.ToTensor(),              # PIL Image -> tensor, scaled to [0, 1]
            transform_BZ                        # normalize to [-1, 1]
        ])
        self.val_tf = transforms.Compose([
            transforms.Resize(224),             # resize to 224x224
            transforms.ToTensor(),              # PIL Image -> tensor, scaled to [0, 1]
            transform_BZ                        # normalize to [-1, 1]
        ])

    def get_images(self, txt_path):
        # Each line of the txt file is "<image path> <label>"
        with open(txt_path, 'r', encoding='utf-8') as f:
            imgs_info = f.readlines()
            imgs_info = list(map(lambda x: x.strip().split(' '), imgs_info))
        return imgs_info

    def padding_black(self, img):
        # Scale the longer side to 224, then paste the image
        # centered onto a black 224x224 canvas
        w, h = img.size
        scale = 224. / max(w, h)
        img_fg = img.resize([int(x) for x in [w * scale, h * scale]])
        size_fg = img_fg.size
        size_bg = 224
        img_bg = Image.new("RGB", (size_bg, size_bg))
        img_bg.paste(img_fg, ((size_bg - size_fg[0]) // 2,
                              (size_bg - size_fg[1]) // 2))
        return img_bg

    def __getitem__(self, index):
        img_path, label = self.imgs_info[index]
        img = Image.open(img_path)
        img = img.convert("RGB")
        img = self.padding_black(img)
        if self.train_flag:
            img = self.train_tf(img)
        else:
            img = self.val_tf(img)
        return img, int(label)

    def __len__(self):
        return len(self.imgs_info)
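Before wiring this into training, a minimal smoke test of the loader helps catch path problems early (a sketch of mine; the txt file is the one generated in section 2.1):

if __name__ == '__main__':
    train_data = LoadData(r"F:\SqueezeNet\train.txt", train_flag=True)
    img, label = train_data[0]
    print(img.shape, label)   # expect torch.Size([3, 224, 224]) and 0 or 1
    print(len(train_data))    # 400 training images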

2.3 Building the Model

The core building block of SqueezeNet is the Fire module: a 1x1 "squeeze" convolution first reduces the number of channels, then parallel 1x1 and 3x3 "expand" convolutions are applied and their outputs are concatenated along the channel dimension. The code below matches the standard torchvision implementation, with num_classes configurable for our 2-class task.

import torch
import torch.nn as nn
import torch.nn.init as init
class Fire(nn.Module):
    def __init__(self, inplanes, squeeze_planes,
                 expand1x1_planes, expand3x3_planes):
        super(Fire, self).__init__()
        self.inplanes = inplanes
        self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
        self.squeeze_activation = nn.ReLU(inplace=True)
        self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
                                   kernel_size=1)
        self.expand1x1_activation = nn.ReLU(inplace=True)
        self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
                                   kernel_size=3, padding=1)
        self.expand3x3_activation = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.squeeze_activation(self.squeeze(x))
        return torch.cat([
            self.expand1x1_activation(self.expand1x1(x)),
            self.expand3x3_activation(self.expand3x3(x))
        ], 1)
class SqueezeNet(nn.Module):
    def __init__(self, version='1_0', num_classes=1000):
        super(SqueezeNet, self).__init__()
        self.num_classes = num_classes
        if version == '1_0':
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=7, stride=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(96, 16, 64, 64),
                Fire(128, 16, 64, 64),
                Fire(128, 32, 128, 128),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(256, 32, 128, 128),
                Fire(256, 48, 192, 192),
                Fire(384, 48, 192, 192),
                Fire(384, 64, 256, 256),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(512, 64, 256, 256),
            )
        elif version == '1_1':
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(64, 16, 64, 64),
                Fire(128, 16, 64, 64),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(128, 32, 128, 128),
                Fire(256, 32, 128, 128),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(256, 48, 192, 192),
                Fire(384, 48, 192, 192),
                Fire(384, 64, 256, 256),
                Fire(512, 64, 256, 256),
            )
        else:
            # FIXME: Is this needed? SqueezeNet should only be called from the
            # FIXME: squeezenet1_x() functions
            # FIXME: This checking is not done for the other models
            raise ValueError("Unsupported SqueezeNet version {version}: "
                             "1_0 or 1_1 expected".format(version=version))
        # Final convolution is initialized differently from the rest
        final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            final_conv,
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1))
        )
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                if m is final_conv:
                    init.normal_(m.weight, mean=0.0, std=0.01)
                else:
                    init.kaiming_uniform_(m.weight)
                if m.bias is not None:
                    init.constant_(m.bias, 0)
    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        return torch.flatten(x, 1)
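To confirm the network wires up for the 2-class task before training, a quick forward pass on a dummy batch (my addition, not part of the original script):

if __name__ == '__main__':
    net = SqueezeNet(version='1_0', num_classes=2)
    dummy = torch.randn(1, 3, 224, 224)   # one fake RGB image
    print(net(dummy).shape)               # expect torch.Size([1, 2])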

2.4 Training

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from model import SqueezeNet
import torchsummary
from dataloader import LoadData
import copy

device = "cuda:0" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

model = SqueezeNet(num_classes=2).to(device)
# print(model)
# print(torchsummary.summary(model, (3, 224, 224), 1))

# Load the training and validation sets
train_data = LoadData(r"F:\SqueezeNet\train.txt", True)
train_dl = torch.utils.data.DataLoader(train_data, batch_size=16, pin_memory=True,
                                       shuffle=True, num_workers=0)
# train_flag=False: no random flips when evaluating
test_data = LoadData(r"F:\SqueezeNet\valid.txt", False)
test_dl = torch.utils.data.DataLoader(test_data, batch_size=16, pin_memory=True,
                                      shuffle=False, num_workers=0)

# Training loop for one epoch
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # number of training samples
    num_batches = len(dataloader)    # number of batches, ceil(size / batch_size)
    print('num_batches:', num_batches)
    train_loss, train_acc = 0, 0     # running loss and accuracy
    for X, y in dataloader:          # fetch a batch of images and labels
        X, y = X.to(device), y.to(device)
        # Forward pass and loss
        pred = model(X)                  # network output
        loss = loss_fn(pred, y)          # gap between predictions and targets
        # Backpropagation
        optimizer.zero_grad()            # reset gradients
        loss.backward()                  # backpropagate
        optimizer.step()                 # update parameters
        # Accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

# Evaluation loop
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # number of test samples
    num_batches = len(dataloader)    # number of batches, ceil(size / batch_size)
    test_loss, test_acc = 0, 0
    # No gradients needed during evaluation; saves memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            # Compute loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)
            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()
    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss

# Run training
epochs = 20
train_loss = []
train_acc = []
test_loss = []
test_acc = []
best_acc = 0  # best accuracy so far, used to pick the best model
loss_function = nn.CrossEntropyLoss()                       # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam optimizer
for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_function, optimizer)
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_function)
    # Keep a copy of the best model in best_model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    # Current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

# Save the best model's weights to a file
PATH = './best_model.pth'
torch.save(best_model.state_dict(), PATH)
print('Done')


2.5 Visualizing the Results

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")               # suppress warnings
plt.rcParams['font.sans-serif']    = ['SimHei'] # render CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False      # render the minus sign correctly
plt.rcParams['figure.dpi']         = 100        # figure resolution

epochs_range = range(epochs)
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Test Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Test Loss')
plt.show()

[Figure: training and test accuracy/loss curves over the 20 epochs]

Feel free to tune the learning rate and batch_size yourself; I left the hyperparameters untouched here.
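If you do experiment, one low-effort option is a learning-rate schedule. The sketch below (my addition, not in the original script) halves Adam's learning rate every 5 epochs; model, optimizer, epochs, train() and test() are as defined in section 2.4:

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
for epoch in range(epochs):
    model.train()
    train(train_dl, model, loss_function, optimizer)
    model.eval()
    test(test_dl, model, loss_function)
    scheduler.step()  # decay the learning rate once per epoch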

3. Predicting a Single Image from the Dataset

import matplotlib.pyplot as plt
from PIL import Image
from torchvision.transforms import transforms
from model import SqueezeNet
import torch

data_transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Resize((224, 224)),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

img = Image.open(r"F:\SqueezeNet\data\PALM-Validation400\V0008.jpg")
plt.imshow(img)
img = data_transform(img)
img = torch.unsqueeze(img, dim=0)  # add a batch dimension
name = ['non-pathologic myopia', 'pathologic myopia']
model_weight_path = r"F:\SqueezeNet\best_model.pth"
model = SqueezeNet(num_classes=2)
# map_location='cpu' lets CPU-only machines load GPU-trained weights
model.load_state_dict(torch.load(model_weight_path, map_location='cpu'))
model.eval()
with torch.no_grad():
    output = torch.squeeze(model(img))
    predict = torch.softmax(output, dim=0)
    # Index of the most likely class
    predict_cla = torch.argmax(predict).numpy()
    print('Predicted index:', predict_cla)
print('Prediction: {}, confidence: {}'.format(name[predict_cla], predict[predict_cla].item()))
plt.show()

Predicted index: 1
Prediction: pathologic myopia, confidence: 0.9768268465995789


This concludes the article on eye disease recognition with a SqueezeNet-based convolutional neural network (CNN). For more on SqueezeNet eye disease recognition, please search 腳本之家's earlier articles, and we hope you will keep supporting 腳本之家!
