
Python Meets CNNs: A Detailed Guide

Updated: December 14, 2022, 11:19:32   Author: 石奧博123
A CNN (convolutional neural network) is used mainly for image recognition and classification. It is built by stacking an input layer, convolutional layers, pooling layers, fully connected (affine) layers, and a softmax layer. Another essential component of a CNN is the filter (kernel): operating in the convolutional and pooling layers, it determines how the data is convolved and pooled.

Introduction to AlexNet

AlexNet was the winning network of the ILSVRC 2012 (ImageNet Large Scale Visual Recognition Challenge) competition, raising classification accuracy from the traditional 70%+ to 80%+. It was designed by Hinton and his student Alex Krizhevsky. It was also after that year that deep learning began to develop rapidly.

Key ideas

(1) First use of GPUs to accelerate network training.

(2) Used the ReLU activation function instead of the traditional Sigmoid and Tanh activation functions.

(3) Used LRN (local response normalization).

(4) Applied Dropout (random neuron deactivation) to the first two fully connected layers to reduce overfitting.

Overfitting

The root causes are too many feature dimensions, an overly complex model with too many parameters, too little training data, and too much noise. The fitted function then predicts the training set perfectly but performs poorly on new test data: it overfits the training data without accounting for generalization.

Solution

Use Dropout to randomly deactivate a portion of the neurons during the network's forward pass, as sketched below.
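As a minimal illustration of what that means in Keras (this snippet is mine, not part of the article's model code, and the 0.5 rate is an arbitrary choice): Dropout zeroes a random fraction of activations during training and does nothing at inference.

import tensorflow as tf
from tensorflow.keras import layers

drop = layers.Dropout(0.5)      # randomly deactivate ~50% of units while training
x = tf.ones((1, 8))             # a dummy batch of activations

print(drop(x, training=True))   # survivors are scaled by 1/(1 - rate); the rest are zeroed
print(drop(x, training=False))  # identity at inference time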

Formula for the matrix size after convolution

The size of the matrix after convolution is given by N = (W − F + 2P) / S + 1, where the symbols are defined below (a worked example follows the list):

① Input image size: W × W

② Filter size: F × F

③ Stride: S

④ Number of padding pixels: P
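As a quick sanity check using the numbers from the AlexNet code below (the 224×224 input is first zero-padded to 227×227): Conv1 with an 11×11 kernel, no further padding, and stride 4 gives N = (227 − 11 + 0) / 4 + 1 = 55. A small helper for this arithmetic (conv_output_size is my own name, not from the article; the same formula also covers pooling):

def conv_output_size(w, f, p, s):
    # N = (W - F + 2P) / S + 1, assuming the division is exact
    return (w - f + 2 * p) // s + 1

print(conv_output_size(227, 11, 0, 4))  # 55 -> AlexNet Conv1 output size
print(conv_output_size(55, 3, 0, 2))    # 27 -> after the first 3x3, stride-2 max pool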

AlexNet network architecture

layer_name   kernel_size   kernel_num   padding   stride
Conv1        11            96           [1, 2]    4
Maxpool1     3             None         0         2
Conv2        5             256          [2, 2]    1
Maxpool2     3             None         0         2
Conv3        3             384          [1, 1]    1
Conv4        3             384          [1, 1]    1
Conv5        3             256          [1, 1]    1
Maxpool3     3             None         0         2
FC1          2048          None         None      None
FC2          2048          None         None      None
FC3          1000          None         None      None

(For the FC rows, the kernel_size column gives the number of output units.)

Model code

from tensorflow.keras import layers, models, Model, Sequential
def AlexNet_v1(im_height=224, im_width=224, num_classes=1000):
    # TensorFlow tensors use NHWC channel ordering
    input_image = layers.Input(shape=(im_height, im_width, 3), dtype="float32")  # output(None, 224, 224, 3)
    x = layers.ZeroPadding2D(((1, 2), (1, 2)))(input_image)                      # output(None, 227, 227, 3)
    x = layers.Conv2D(48, kernel_size=11, strides=4, activation="relu")(x)       # output(None, 55, 55, 48)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)                              # output(None, 27, 27, 48)
    x = layers.Conv2D(128, kernel_size=5, padding="same", activation="relu")(x)  # output(None, 27, 27, 128)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)                              # output(None, 13, 13, 128)
    x = layers.Conv2D(192, kernel_size=3, padding="same", activation="relu")(x)  # output(None, 13, 13, 192)
    x = layers.Conv2D(192, kernel_size=3, padding="same", activation="relu")(x)  # output(None, 13, 13, 192)
    x = layers.Conv2D(128, kernel_size=3, padding="same", activation="relu")(x)  # output(None, 13, 13, 128)
    x = layers.MaxPool2D(pool_size=3, strides=2)(x)                              # output(None, 6, 6, 128)
    x = layers.Flatten()(x)                         # output(None, 6*6*128)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(2048, activation="relu")(x)    # output(None, 2048)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(2048, activation="relu")(x)    # output(None, 2048)
    x = layers.Dense(num_classes)(x)              # output(None, num_classes)
    predict = layers.Softmax()(x)
    model = models.Model(inputs=input_image, outputs=predict)
    return model
class AlexNet_v2(Model):
    def __init__(self, num_classes=1000):
        super(AlexNet_v2, self).__init__()
        self.features = Sequential([
            layers.ZeroPadding2D(((1, 2), (1, 2))),                                 # output(None, 227, 227, 3)
            layers.Conv2D(48, kernel_size=11, strides=4, activation="relu"),        # output(None, 55, 55, 48)
            layers.MaxPool2D(pool_size=3, strides=2),                               # output(None, 27, 27, 48)
            layers.Conv2D(128, kernel_size=5, padding="same", activation="relu"),   # output(None, 27, 27, 128)
            layers.MaxPool2D(pool_size=3, strides=2),                               # output(None, 13, 13, 128)
            layers.Conv2D(192, kernel_size=3, padding="same", activation="relu"),   # output(None, 13, 13, 192)
            layers.Conv2D(192, kernel_size=3, padding="same", activation="relu"),   # output(None, 13, 13, 192)
            layers.Conv2D(128, kernel_size=3, padding="same", activation="relu"),   # output(None, 13, 13, 128)
            layers.MaxPool2D(pool_size=3, strides=2)])                              # output(None, 6, 6, 128)
        self.flatten = layers.Flatten()
        self.classifier = Sequential([
            layers.Dropout(0.2),
            layers.Dense(2048, activation="relu"),    # output(None, 2048)
            layers.Dropout(0.2),
            layers.Dense(2048, activation="relu"),    # output(None, 2048)
            layers.Dense(num_classes),                # output(None, num_classes)
            layers.Softmax()
        ])
    def call(self, inputs, **kwargs):
        x = self.features(inputs)
        x = self.flatten(x)
        x = self.classifier(x)
        return x
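
A quick usage sketch (my own addition, not from the article): build both variants and print their summaries. num_classes=5 is just an example value.

if __name__ == "__main__":
    model = AlexNet_v1(im_height=224, im_width=224, num_classes=5)
    model.summary()                      # functional model: shapes are known immediately

    model_v2 = AlexNet_v2(num_classes=5)
    model_v2.build((None, 224, 224, 3))  # a subclassed model must be built before summary()
    model_v2.summary()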

Introduction to VGGNet

VGG was proposed in 2014 by the well-known Visual Geometry Group (VGG) at the University of Oxford. It took first place in the Localization Task and second place in the Classification Task of that year's ImageNet competition.

Key idea

Stack multiple 3×3 convolution kernels to replace larger kernels (reducing the required parameters). The paper notes that two stacked 3×3 kernels can replace one 5×5 kernel, and three stacked 3×3 kernels can replace one 7×7 kernel.

Assume the input and output both have C channels.

Parameters required by one 7×7 kernel: 7 × 7 × C × C = 49C^2

Parameters required by three stacked 3×3 kernels: 3 × 3 × C × C + 3 × 3 × C × C + 3 × 3 × C × C = 27C^2
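A two-line check of that arithmetic (C = 64 is an arbitrary example):

C = 64
print(7 * 7 * C * C)        # 200704 parameters for one 7x7 conv    (49 * C**2)
print(3 * (3 * 3 * C * C))  # 110592 parameters for three 3x3 convs (27 * C**2)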

Receptive field

In a convolutional neural network, the receptive field is the region of the input layer that determines a single element of some layer's output. Put simply, it is the area of the input that one unit of an output feature map corresponds to.

Receptive field formula

F(i) = (F(i+1) − 1) × Stride + Ksize

where F(i) is the receptive field of layer i, Stride is the stride of layer i, and Ksize is the convolution or pooling kernel size. The sketch below applies this formula to the stacked 3×3 example.
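Applying the formula backwards through three stacked 3×3, stride-1 convolutions confirms the 7×7 equivalence claimed earlier (a small sketch of mine, not from the paper or the article):

def receptive_field(layer_specs):
    # walk F(i) = (F(i+1) - 1) * Stride + Ksize from the output back to the input
    f = 1  # a single unit of the final feature map
    for ksize, stride in reversed(layer_specs):
        f = (f - 1) * stride + ksize
    return f

# three 3x3 convs with stride 1: receptive field 7, same as one 7x7 kernel
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7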

VGGNet network architecture

The per-variant layer configurations (vgg11/13/16/19) are encoded in the cfgs dictionary in the code below.

Model code

from tensorflow.keras import layers, Model, Sequential
CONV_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 2.0,
        'mode': 'fan_out',
        'distribution': 'truncated_normal'
    }
}
DENSE_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 1. / 3.,
        'mode': 'fan_out',
        'distribution': 'uniform'
    }
}
def VGG(feature, im_height=224, im_width=224, num_classes=1000):
    # TensorFlow tensors use NHWC channel ordering
    input_image = layers.Input(shape=(im_height, im_width, 3), dtype="float32")
    x = feature(input_image)
    x = layers.Flatten()(x)
    x = layers.Dropout(rate=0.5)(x)
    x = layers.Dense(2048, activation='relu',
                     kernel_initializer=DENSE_KERNEL_INITIALIZER)(x)
    x = layers.Dropout(rate=0.5)(x)
    x = layers.Dense(2048, activation='relu',
                     kernel_initializer=DENSE_KERNEL_INITIALIZER)(x)
    x = layers.Dense(num_classes,
                     kernel_initializer=DENSE_KERNEL_INITIALIZER)(x)
    output = layers.Softmax()(x)
    model = Model(inputs=input_image, outputs=output)
    return model
def make_feature(cfg):
    feature_layers = []
    for v in cfg:
        if v == "M":
            feature_layers.append(layers.MaxPool2D(pool_size=2, strides=2))
        else:
            # every non-"M" entry is the output channel count of a 3x3 conv layer
            conv2d = layers.Conv2D(v, kernel_size=3, padding="same", activation="relu",
                                   kernel_initializer=CONV_KERNEL_INITIALIZER)
            feature_layers.append(conv2d)
    return Sequential(feature_layers, name="feature")
cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
def vgg(model_name="vgg16", im_height=224, im_width=224, num_classes=1000):
    assert model_name in cfgs, "unsupported model: {}".format(model_name)
    cfg = cfgs[model_name]
    model = VGG(make_feature(cfg), im_height=im_height, im_width=im_width, num_classes=num_classes)
    return model
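
A usage sketch (my addition): instantiate a VGG16 and print its layer summary.

if __name__ == "__main__":
    model = vgg(model_name="vgg16", im_height=224, im_width=224, num_classes=1000)
    model.summary()  # the conv feature extractor followed by the 2048-unit FC head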

This concludes the article on Python meeting CNNs. For more related Python CNN content, please search 腳本之家's earlier articles or continue browsing, and we hope you will continue to support 腳本之家!
