Reproducing the MobileNetV2 Neural Network Model in Python: A Detailed Walkthrough
What is the MobileNetV2 model?
MobileNetV2, the successor to MobileNet, is also a very solid model.
MobileNet is a lightweight deep neural network proposed by Google for mobile phones and other embedded devices. Its core idea is the depthwise separable convolution, which splits a standard convolution into a per-channel spatial (depthwise) convolution followed by a 1x1 pointwise convolution.
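To make the idea concrete, here is a minimal sketch (separate from the model code later in this article) of a depthwise separable convolution in Keras; the input shape and the 64 output filters are arbitrary example values:

# Minimal sketch of a depthwise separable convolution in Keras.
# The input shape and output channel count are example values only.
from keras.layers import Input, DepthwiseConv2D, Conv2D
from keras.models import Model

inp = Input(shape=(112, 112, 32))
# 3x3 depthwise convolution: one filter per input channel (spatial filtering only)
x = DepthwiseConv2D(kernel_size=3, padding='same', use_bias=False)(inp)
# 1x1 pointwise convolution: mixes channels and sets the output depth
x = Conv2D(64, kernel_size=1, padding='same', use_bias=False)(x)
demo = Model(inp, x)
demo.summary()  # far fewer parameters than a single Conv2D(64, 3)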
MobileNetV2 is an upgraded version of MobileNet with two distinguishing features:
1. Inverted residuals. ResNet50 introduced the bottleneck design: a 1x1 convolution reduces the channel count before the 3x3 convolution, and another 1x1 convolution restores it afterwards, i.e. compress first, then expand. Compared with stacking plain 3x3 convolutions, this gives better results with fewer parameters. MobileNetV2 inverts this structure: a 1x1 convolution expands the channels before the 3x3 (depthwise) convolution, and a 1x1 convolution compresses them afterwards, i.e. expand first, then compress.
2. Linear bottlenecks. To keep ReLU from destroying features in the low-dimensional space, after expanding with a 1x1 convolution, applying the 3x3 convolution, and projecting back down with a 1x1 convolution, no ReLU6 is applied to the projection output; the residual addition is performed directly on this linear output.
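Putting the two ideas together, a single inverted residual block looks roughly like the following minimal sketch; the function name inverted_residual_sketch and its arguments are illustrative only, and the full version used by the model, _inverted_res_block, appears in the implementation section below:

# Minimal sketch of one inverted residual block with a linear bottleneck.
# in_channels, out_channels and the default expansion factor 6 are example values.
from keras.layers import Conv2D, DepthwiseConv2D, BatchNormalization, Activation, Add
from keras import backend

def relu6(x):
    return backend.relu(x, max_value=6)

def inverted_residual_sketch(inputs, in_channels, out_channels, expansion=6):
    # 1x1 convolution expands the channels (expand first ...)
    x = Conv2D(expansion * in_channels, 1, padding='same', use_bias=False)(inputs)
    x = BatchNormalization()(x)
    x = Activation(relu6)(x)
    # 3x3 depthwise convolution filters each expanded channel spatially
    x = DepthwiseConv2D(3, strides=1, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation(relu6)(x)
    # 1x1 convolution projects back down (... then compress) with NO ReLU6:
    # this is the linear bottleneck
    x = Conv2D(out_channels, 1, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    # residual connection only when input and output shapes match
    if in_channels == out_channels:
        x = Add()([inputs, x])
    return x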
The overall network structure is as follows (the bottleneck blocks perform exactly the operations described above).
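The original structure diagram is not reproduced here; as a reference, the shape transitions that the implementation below performs (with alpha=1.0 and a 224x224x3 input) are:

Input shape | Operation | Output shape
---|---|---
224,224,3 | 3x3 Conv2D (stride 2) + BN + ReLU6 | 112,112,32
112,112,32 | bottleneck ×1 (expansion 1, stride 1) | 112,112,16
112,112,16 | bottleneck ×2 (expansion 6, first stride 2) | 56,56,24
56,56,24 | bottleneck ×3 (expansion 6, first stride 2) | 28,28,32
28,28,32 | bottleneck ×4 (expansion 6, first stride 2) | 14,14,64
14,14,64 | bottleneck ×3 (expansion 6, stride 1) | 14,14,96
14,14,96 | bottleneck ×3 (expansion 6, first stride 2) | 7,7,160
7,7,160 | bottleneck ×1 (expansion 6, stride 1) | 7,7,320
7,7,320 | 1x1 Conv2D + BN + ReLU6 | 7,7,1280
7,7,1280 | GlobalAveragePooling2D | 1280
1280 | Dense + softmax | classes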
MobileNetV2 network implementation code
#-------------------------------------------------------------#
#   MobileNetV2 network definition
#-------------------------------------------------------------#
import numpy as np
from keras import backend
from keras.preprocessing import image
from keras.models import Model
from keras.layers import BatchNormalization
from keras.layers import Conv2D, Add, ZeroPadding2D, GlobalAveragePooling2D, Dropout, Dense
from keras.layers import MaxPooling2D, Activation, DepthwiseConv2D, Input, GlobalMaxPooling2D
from keras.applications.imagenet_utils import decode_predictions
from keras.utils.data_utils import get_file


BASE_WEIGHT_PATH = ('https://github.com/JonathanCMitchell/mobilenet_v2_keras/'
                    'releases/download/v1.1/')

# ReLU capped at 6
def relu6(x):
    return backend.relu(x, max_value=6)

# Compute the zero padding needed before a strided convolution
def correct_pad(inputs, kernel_size):
    img_dim = 1
    input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]

    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)

    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)

    correct = (kernel_size[0] // 2, kernel_size[1] // 2)

    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))

# Round the channel count to a multiple of 8,
# needed because the width multiplier alpha is applied
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


def MobileNetV2(input_shape=[224, 224, 3],
                alpha=1.0,
                include_top=True,
                weights='imagenet',
                classes=1000):
    rows = input_shape[0]

    img_input = Input(shape=input_shape)

    # stem
    # 224,224,3 -> 112,112,32
    first_block_filters = _make_divisible(32 * alpha, 8)
    x = ZeroPadding2D(padding=correct_pad(img_input, 3),
                      name='Conv1_pad')(img_input)
    x = Conv2D(first_block_filters,
               kernel_size=3,
               strides=(2, 2),
               padding='valid',
               use_bias=False,
               name='Conv1')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name='bn_Conv1')(x)
    x = Activation(relu6, name='Conv1_relu')(x)

    # 112,112,32 -> 112,112,16
    x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1,
                            expansion=1, block_id=0)
    # 112,112,16 -> 56,56,24
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2,
                            expansion=6, block_id=1)
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1,
                            expansion=6, block_id=2)
    # 56,56,24 -> 28,28,32
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2,
                            expansion=6, block_id=3)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=4)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=5)
    # 28,28,32 -> 14,14,64
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=2,
                            expansion=6, block_id=6)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=7)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=8)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=9)
    # 14,14,64 -> 14,14,96
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=10)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=11)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=12)
    # 14,14,96 -> 7,7,160
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=2,
                            expansion=6, block_id=13)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,
                            expansion=6, block_id=14)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,
                            expansion=6, block_id=15)
    # 7,7,160 -> 7,7,320
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1,
                            expansion=6, block_id=16)

    if alpha > 1.0:
        last_block_filters = _make_divisible(1280 * alpha, 8)
    else:
        last_block_filters = 1280

    # 7,7,320 -> 7,7,1280
    x = Conv2D(last_block_filters,
               kernel_size=1,
               use_bias=False,
               name='Conv_1')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name='Conv_1_bn')(x)
    x = Activation(relu6, name='out_relu')(x)

    # classification head, only built when include_top is True
    if include_top:
        x = GlobalAveragePooling2D()(x)
        x = Dense(classes, activation='softmax',
                  use_bias=True, name='Logits')(x)

    inputs = img_input
    model = Model(inputs, x, name='mobilenetv2_%0.2f_%s' % (alpha, rows))

    # Load weights.
    if weights == 'imagenet':
        if include_top:
            model_name = ('mobilenet_v2_weights_tf_dim_ordering_tf_kernels_' +
                          str(alpha) + '_' + str(rows) + '.h5')
        else:
            model_name = ('mobilenet_v2_weights_tf_dim_ordering_tf_kernels_' +
                          str(alpha) + '_' + str(rows) + '_no_top' + '.h5')
        weight_path = BASE_WEIGHT_PATH + model_name
        weights_path = get_file(model_name, weight_path,
                                cache_subdir='models')
        model.load_weights(weights_path)
    elif weights is not None:
        model.load_weights(weights)

    return model


def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id):
    in_channels = backend.int_shape(inputs)[-1]
    pointwise_conv_filters = int(filters * alpha)
    pointwise_filters = _make_divisible(pointwise_conv_filters, 8)

    x = inputs
    prefix = 'block_{}_'.format(block_id)

    # part 1: expand the channels with a 1x1 convolution
    if block_id:
        x = Conv2D(expansion * in_channels,
                   kernel_size=1,
                   padding='same',
                   use_bias=False,
                   activation=None,
                   name=prefix + 'expand')(x)
        x = BatchNormalization(epsilon=1e-3,
                               momentum=0.999,
                               name=prefix + 'expand_BN')(x)
        x = Activation(relu6, name=prefix + 'expand_relu')(x)
    else:
        prefix = 'expanded_conv_'

    if stride == 2:
        x = ZeroPadding2D(padding=correct_pad(x, 3),
                          name=prefix + 'pad')(x)

    # part 2: 3x3 depthwise convolution
    x = DepthwiseConv2D(kernel_size=3,
                        strides=stride,
                        activation=None,
                        use_bias=False,
                        padding='same' if stride == 1 else 'valid',
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name=prefix + 'depthwise_BN')(x)
    x = Activation(relu6, name=prefix + 'depthwise_relu')(x)

    # part 3: project (compress) the features with a 1x1 convolution;
    # no ReLU here, so the features are not destroyed (linear bottleneck)
    x = Conv2D(pointwise_filters,
               kernel_size=1,
               padding='same',
               use_bias=False,
               activation=None,
               name=prefix + 'project')(x)
    x = BatchNormalization(epsilon=1e-3,
                           momentum=0.999,
                           name=prefix + 'project_BN')(x)

    if in_channels == pointwise_filters and stride == 1:
        return Add(name=prefix + 'add')([inputs, x])
    return x
Image prediction
Once the network is built, the following code can be used to run a prediction.
# Scale pixel values from [0, 255] to [-1, 1]
def preprocess_input(x):
    x /= 255.
    x -= 0.5
    x *= 2.
    return x


if __name__ == '__main__':
    model = MobileNetV2(input_shape=(224, 224, 3))
    model.summary()

    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)

    preds = model.predict(x)
    print(np.argmax(preds))
    print('Predicted:', decode_predictions(preds, 1))
The pretrained MobileNetV2 weights needed for prediction are downloaded automatically at runtime; on Windows the downloaded files are stored under C:\Users\Administrator\.keras\models (the .keras\models folder in the user's home directory).
You can pass different alpha values to MobileNetV2 to build variants of different widths (alpha is a channel-width multiplier). The available pretrained alpha values are listed below; a short usage sketch follows the table.
Model | Top-1 error | Top-5 error | 10-5 error | Size (params) | Stem (params, no top)
---|---|---|---|---|---
MobileNetV2(alpha=0.35) | 39.914 | 17.568 | 15.422 | 1.7M | 0.4M
MobileNetV2(alpha=0.50) | 34.806 | 13.938 | 11.976 | 2.0M | 0.7M
MobileNetV2(alpha=0.75) | 30.468 | 10.824 | 9.188 | 2.7M | 1.4M
MobileNetV2(alpha=1.0) | 28.664 | 9.858 | 8.322 | 3.5M | 2.3M
MobileNetV2(alpha=1.3) | 25.320 | 7.878 | 6.728 | 5.4M | 3.8M
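For example, building the alpha=0.5 variant only requires changing the alpha argument. This is a minimal usage sketch that assumes the MobileNetV2 function defined above is in scope and that the corresponding pretrained weights are available from the weight URL used earlier:

# Build a narrower MobileNetV2 by changing the width multiplier alpha.
# Smaller alpha scales down the channel count of every layer,
# giving fewer parameters and a smaller weight file.
model_small = MobileNetV2(input_shape=[224, 224, 3], alpha=0.5)
model_small.summary()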
That concludes this detailed walkthrough of reproducing the MobileNetV2 neural network model in Python. For more material on reproducing the MobileNetV2 model, please see the other related articles on 腳本之家.