A Detailed Walkthrough of Reproducing the ResNet50 Neural Network Model in Python
What Is a Residual Network?
I recently ran into residual networks while reading about YOLOv3 and got curious about the structure, which led me back to its original use in the ResNet family.
Residual net (residual network):
The output of one of the earlier layers skips over several intermediate layers and is fed directly into the input of a later layer.
This means that part of a later feature layer's content is contributed linearly by that earlier layer.
[Figure: structure of a residual block with its skip connection]
Deep residual networks were designed to overcome the degradation problem: as plain networks get deeper, training becomes less effective and accuracy stops improving.
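In equation form, a residual block computes y = F(x) + x, where F(x) is the output of a small stack of layers and the addition is the skip connection. A minimal Keras sketch of that idea (the layer sizes here are illustrative, not taken from any particular network):

from keras import layers
from keras.layers import Input, Conv2D, Activation

# F(x): a small stack of layers on the main path.
inputs = Input(shape=(56, 56, 64))
f = Conv2D(64, (3, 3), padding='same')(inputs)
f = Activation('relu')(f)
f = Conv2D(64, (3, 3), padding='same')(f)

# The skip connection: add the block's input back to its output.
outputs = Activation('relu')(layers.add([f, inputs]))  # y = F(x) + x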
What Is the ResNet50 Model?
ResNet50 is built from two basic blocks, the Conv Block and the Identity Block. The Conv Block's input and output dimensions differ, so Conv Blocks cannot be chained back to back; their role is to change the network's dimensions.
The Identity Block's input and output dimensions are the same, so it can be chained repeatedly and is used to deepen the network.
[Figure: structure of the Conv Block]
[Figure: structure of the Identity Block]
Both blocks are residual structures.
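To make the difference concrete: an Identity Block's shortcut is the raw input, which only works when input and output shapes already agree, while a Conv Block routes its shortcut through a strided 1x1 convolution so it matches the reshaped main path. A quick shape check, assuming the identity_block and conv_block functions from the code section below have already been defined:

import keras.backend as K
from keras.layers import Input

x = Input(shape=(56, 56, 256))
y1 = identity_block(x, 3, [64, 64, 256], stage=2, block='z')
y2 = conv_block(x, 3, [128, 128, 512], stage=3, block='z')
print(K.int_shape(y1))  # (None, 56, 56, 256): unchanged, so it can be chained
print(K.int_shape(y2))  # (None, 28, 28, 512): spatial size halved, channels changed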
[Figure: the overall ResNet50 architecture]
If this still looks abstract, there is also a very good diagram I found online; it is placed at the very end of the article, since it would take up too much room up front.
ResNet50 Network Implementation Code
#-------------------------------------------------------------#
#   The ResNet50 network definition
#-------------------------------------------------------------#
from __future__ import print_function

import numpy as np
from keras import layers
from keras.layers import Input
from keras.layers import Dense, Conv2D, MaxPooling2D, ZeroPadding2D, AveragePooling2D
from keras.layers import Activation, BatchNormalization, Flatten
from keras.models import Model
from keras.preprocessing import image
import keras.backend as K
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import preprocess_input


def identity_block(input_tensor, kernel_size, filters, stage, block):
    # Residual block whose shortcut is the unchanged input; input and
    # output shapes match, so these blocks can be chained freely.
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)

    # Add the shortcut (the raw input) to the main path.
    x = layers.add([x, input_tensor])
    x = Activation('relu')(x)
    return x


def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
    # Residual block whose shortcut passes through a strided 1x1 convolution,
    # letting the block change both the spatial size and the channel count.
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), strides=strides, name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)

    # Project the shortcut so its shape matches the main path.
    shortcut = Conv2D(filters3, (1, 1), strides=strides, name=conv_name_base + '1')(input_tensor)
    shortcut = BatchNormalization(name=bn_name_base + '1')(shortcut)

    x = layers.add([x, shortcut])
    x = Activation('relu')(x)
    return x


def ResNet50(input_shape=[224, 224, 3], classes=1000):
    img_input = Input(shape=input_shape)

    # Stem: 7x7 stride-2 convolution followed by 3x3 stride-2 max pooling.
    x = ZeroPadding2D((3, 3))(img_input)
    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')

    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    # Classification head: 7x7 average pooling plus a softmax layer.
    x = AveragePooling2D((7, 7), name='avg_pool')(x)
    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc1000')(x)

    model = Model(img_input, x, name='resnet50')
    model.load_weights("resnet50_weights_tf_dim_ordering_tf_kernels.h5")

    return model
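Note that the first conv_block of stage 2 is called with strides=(1, 1): the max-pooling layer just before it has already halved the feature map, so that block only changes the channel count. With a 224x224 input, the feature map shrinks stage by stage as 112 → 55 → 28 → 14 → 7 (the valid-padded max pooling gives 55 rather than 56), which is why the final average pooling uses a 7x7 window. A small sanity check of those sizes, assuming the weight file is present so ResNet50() builds without error:

model = ResNet50()
# The last 1x1 convolution of each stage is a convenient probe point.
for name in ['res2c_branch2c', 'res3d_branch2c',
             'res4f_branch2c', 'res5c_branch2c']:
    print(name, model.get_layer(name).output_shape)
# Expected: (None, 55, 55, 256), (None, 28, 28, 512),
#           (None, 14, 14, 1024), (None, 7, 7, 2048)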
Image Prediction
Once the network is built, the following code runs a prediction on an image.
if __name__ == '__main__':
    model = ResNet50()
    model.summary()

    # Load the test image and turn it into a (1, 224, 224, 3) batch.
    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)

    print('Input image shape:', x.shape)
    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))
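For reference, preprocess_input from keras.applications.imagenet_utils prepares the image the way the original Caffe-trained model expects: channels are reordered from RGB to BGR and the ImageNet per-channel mean is subtracted, with no further scaling. A hand-rolled sketch of that default 'caffe' mode (use the library call in real code):

import numpy as np

def preprocess_caffe_style(x):
    # RGB -> BGR, then subtract the ImageNet channel means (BGR order).
    x = x[..., ::-1].astype('float32')
    x[..., 0] -= 103.939  # blue
    x[..., 1] -= 116.779  # green
    x[..., 2] -= 123.68   # red
    return x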
The pretrained ResNet50 weights needed for prediction can be downloaded from https://github.com/fchollet/deep-learning-models/releases, which is very convenient.
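If you would rather not download the file by hand, get_file (already imported in the network code) can fetch and cache it under ~/.keras/models. A sketch; the exact release URL below is an assumption, so check the releases page above:

from keras.utils.data_utils import get_file

# Assumed asset URL -- verify against the releases page.
WEIGHTS_URL = ('https://github.com/fchollet/deep-learning-models/'
               'releases/download/v0.2/'
               'resnet50_weights_tf_dim_ordering_tf_kernels.h5')

# Downloads on first use, then reuses the cached copy.
weights_path = get_file('resnet50_weights_tf_dim_ordering_tf_kernels.h5',
                        WEIGHTS_URL, cache_subdir='models')
model.load_weights(weights_path)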
The prediction output is:
Predicted: [[('n01871265', 'tusker', 0.41107917), ('n02504458', 'African_elephant', 0.39015812), ('n02504013', 'Indian_elephant', 0.12260196), ('n03000247', 'chain_mail', 0.023176488), ('n02437312', 'Arabian_camel', 0.020982226)]]
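decode_predictions returns one list per image in the batch, each entry a (class_id, class_name, score) tuple sorted by descending score, so extracting the top-1 label is straightforward:

# Take the best entry for the first (and only) image in the batch.
top1 = decode_predictions(preds, top=1)[0][0]
print('Top-1: %s (%.2f%%)' % (top1[1], 100 * top1[2]))
# With the output above this prints: Top-1: tusker (41.11%)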
That concludes this detailed walkthrough of reproducing the ResNet50 neural network model in Python; for more material on reproducing ResNet50, see the other related articles on 腳本之家.