Python Neural Networks: A Detailed Walkthrough of Reproducing the ShuffleNetV2 Model
What Is ShuffleNetV2?
ShuffleNetV2 is said to be even better than MobileNet, so I decided to study it properly.
It was introduced in an ECCV 2018 paper on lightweight models.
Most current lightweight models compare speed using FLOPs, a metric that mainly counts the multiply operations in convolutional layers.
In practice, however, networks with identical FLOPs can run at different speeds, so FLOPs alone is not a complete proxy for model speed.
From the comparison shown in the figure below, the authors found that memory-access-intensive operations such as element-wise ops and data I/O also have a large impact on runtime.
Combining theory with experiments, the authors propose four practical guidelines (a quick numeric check of guideline 1 follows the list):
1. MAC (memory access cost) is minimized, and the model runs fastest, when a convolutional layer's input and output channel counts are equal.
2. Excessive use of group convolution increases MAC.
3. Network fragmentation reduces the degree of parallelism.
4. Element-wise operations such as ReLU and Add cannot be ignored: their FLOPs are small, but they require a relatively large MAC.
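To make guideline 1 concrete: for a 1x1 convolution on an h x w feature map, the paper's analysis gives FLOPs B = h*w*c1*c2 and memory access cost MAC = h*w*(c1 + c2) + c1*c2, and for a fixed B, MAC is minimized when c1 = c2. The tiny numeric sketch below is my own illustration, not from the original article:

# Illustrative check of guideline 1: for a 1x1 conv on an h x w feature map,
# FLOPs B = h*w*c1*c2 and MAC = h*w*(c1 + c2) + c1*c2 (per the paper's analysis).
def mac_1x1(h, w, c1, c2):
    return h * w * (c1 + c2) + c1 * c2

# Three channel splits with the same FLOPs budget (56 * 56 * 128 * 128):
for c1, c2 in [(128, 128), (64, 256), (32, 512)]:
    print(c1, c2, mac_1x1(56, 56, c1, c2))
# Prints 819200, 1019904, 1722368: the balanced split has the smallest MAC.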
ShuffleNetV2
1. The Blocks Used
The figure shows the two blocks commonly used in ShuffleNetV2:
1. When stride == 1, the left block is used. Its shortcut branch contains no convolution, so the height and width stay the same; this block is mainly used to deepen the network.
2. When stride == 2, the right block is used. Its shortcut branch contains convolutions, so the height and width can change; this block compresses the feature map spatially, i.e. performs downsampling.
The block implementation code is as follows:
import keras.backend as K
from keras.layers import (Activation, BatchNormalization, Concatenate, Conv2D,
                          DepthwiseConv2D, Lambda)

def channel_split(x, name=''):
    # Number of incoming channels
    in_channels = x.shape.as_list()[-1]
    ip = in_channels // 2
    # Split the channels into two halves
    c_hat = Lambda(lambda z: z[:, :, :, 0:ip], name='%s/sp%d_slice' % (name, 0))(x)
    c = Lambda(lambda z: z[:, :, :, ip:], name='%s/sp%d_slice' % (name, 1))(x)
    return c_hat, c

def channel_shuffle(x):
    height, width, channels = x.shape.as_list()[1:]
    channels_per_split = channels // 2
    # Shuffle the channels: reshape into two groups, transpose, flatten back
    x = K.reshape(x, [-1, height, width, 2, channels_per_split])
    x = K.permute_dimensions(x, (0, 1, 2, 4, 3))
    x = K.reshape(x, [-1, height, width, channels])
    return x

def shuffle_unit(inputs, out_channels, bottleneck_ratio, strides=2, stage=1, block=1):
    bn_axis = -1
    prefix = 'stage{}/block{}'.format(stage, block)

    # out_channels is one of [116, 232, 464] at scale 1
    bottleneck_channels = int(out_channels * bottleneck_ratio / 2)
    if strides < 2:
        c_hat, c = channel_split(inputs, '{}/spl'.format(prefix))
        inputs = c

    # 1x1 convolution
    x = Conv2D(bottleneck_channels, kernel_size=(1, 1), strides=1, padding='same',
               name='{}/1x1conv_1'.format(prefix))(inputs)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_1'.format(prefix))(x)
    x = Activation('relu', name='{}/relu_1x1conv_1'.format(prefix))(x)
    # 3x3 depthwise convolution
    x = DepthwiseConv2D(kernel_size=3, strides=strides, padding='same',
                        name='{}/3x3dwconv'.format(prefix))(x)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_3x3dwconv'.format(prefix))(x)
    # 1x1 convolution
    x = Conv2D(bottleneck_channels, kernel_size=1, strides=1, padding='same',
               name='{}/1x1conv_2'.format(prefix))(x)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_2'.format(prefix))(x)
    x = Activation('relu', name='{}/relu_1x1conv_2'.format(prefix))(x)

    # When strides == 2, the shortcut branch also needs convolutions
    if strides < 2:
        ret = Concatenate(axis=bn_axis, name='{}/concat_1'.format(prefix))([x, c_hat])
    else:
        s2 = DepthwiseConv2D(kernel_size=3, strides=2, padding='same',
                             name='{}/3x3dwconv_2'.format(prefix))(inputs)
        s2 = BatchNormalization(axis=bn_axis, name='{}/bn_3x3dwconv_2'.format(prefix))(s2)
        s2 = Conv2D(bottleneck_channels, kernel_size=1, strides=1, padding='same',
                    name='{}/1x1_conv_3'.format(prefix))(s2)
        s2 = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_3'.format(prefix))(s2)
        s2 = Activation('relu', name='{}/relu_1x1conv_3'.format(prefix))(s2)
        ret = Concatenate(axis=bn_axis, name='{}/concat_2'.format(prefix))([x, s2])

    # Shuffle the concatenated channels so information flows between branches
    ret = Lambda(channel_shuffle, name='{}/channel_shuffle'.format(prefix))(ret)
    return ret

def block(x, channel_map, bottleneck_ratio, repeat=1, stage=1):
    # One downsampling unit (strides=2) followed by `repeat` strides=1 units
    x = shuffle_unit(x, out_channels=channel_map[stage - 1], strides=2,
                     bottleneck_ratio=bottleneck_ratio, stage=stage, block=1)
    for i in range(1, repeat + 1):
        x = shuffle_unit(x, out_channels=channel_map[stage - 1], strides=1,
                         bottleneck_ratio=bottleneck_ratio, stage=stage, block=(1 + i))
    return x
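As a quick sanity check (my own addition, not part of the original article), we can feed a dummy 56,56,24 input through one stride-2 unit followed by one stride-1 unit and confirm the shapes match the stage description in the next section:

# Quick shape check, assuming the functions above have been defined.
from keras.layers import Input

inputs = Input(shape=(56, 56, 24))
# One downsampling unit followed by one stride-1 unit (stage 2 of the 1x model)
x = shuffle_unit(inputs, out_channels=116, bottleneck_ratio=1, strides=2, stage=2, block=1)
x = shuffle_unit(x, out_channels=116, bottleneck_ratio=1, strides=1, stage=2, block=2)
print(K.int_shape(x))  # expected: (None, 28, 28, 116)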
2. Overall Network Architecture
The overall structure of the network is shown in the figure:
1. A 224,224,3 input image first passes through one strided convolution and one max pooling, taking the shape from 224,224,3 -> 112,112,24 -> 56,56,24.
2. One right-hand (stride-2) ShuffleNet block followed by three left-hand (stride-1) blocks takes the shape from 56,56,24 -> 28,28,116.
3. One right-hand block followed by seven left-hand blocks takes the shape from 28,28,116 -> 14,14,232.
4. One right-hand block followed by three left-hand blocks takes the shape from 14,14,232 -> 7,7,464.
5. A 1x1 convolution expands the channels to 1024, taking the shape from 7,7,464 -> 7,7,1024.
6. Global pooling is followed by a fully connected layer for prediction.
Network Implementation Code
ShuffleNetV2 comes in four scales (width multipliers of 0.5x, 1x, 1.5x, and 2x), each corresponding to a differently sized ShuffleNetV2.
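For reference, this small snippet (my own illustration, derived from the channel table in the code below) prints the per-stage channel widths each scale produces; stage 1 always outputs 24 channels, and stages 3 and 4 double the stage-2 width successively:

# Stage-2 widths per scale, as used by the model code below.
out_dim_stage_two = {0.5: 48, 1: 116, 1.5: 176, 2: 244}
for scale, c2 in sorted(out_dim_stage_two.items()):
    print('ShuffleNetV2 {}x: stage2={}, stage3={}, stage4={}'.format(scale, c2, c2 * 2, c2 * 4))
# 0.5x: 48/96/192, 1x: 116/232/464, 1.5x: 176/352/704, 2x: 244/488/976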
import numpy as np
import keras.backend as K
from keras.layers import (Activation, BatchNormalization, Concatenate, Conv2D,
                          Dense, DepthwiseConv2D, GlobalAveragePooling2D, Input,
                          Lambda, MaxPool2D)
from keras.models import Model

def channel_split(x, name=''):
    # Number of incoming channels
    in_channels = x.shape.as_list()[-1]
    ip = in_channels // 2
    # Split the channels into two halves
    c_hat = Lambda(lambda z: z[:, :, :, 0:ip], name='%s/sp%d_slice' % (name, 0))(x)
    c = Lambda(lambda z: z[:, :, :, ip:], name='%s/sp%d_slice' % (name, 1))(x)
    return c_hat, c

def channel_shuffle(x):
    height, width, channels = x.shape.as_list()[1:]
    channels_per_split = channels // 2
    # Shuffle the channels: reshape into two groups, transpose, flatten back
    x = K.reshape(x, [-1, height, width, 2, channels_per_split])
    x = K.permute_dimensions(x, (0, 1, 2, 4, 3))
    x = K.reshape(x, [-1, height, width, channels])
    return x

def shuffle_unit(inputs, out_channels, bottleneck_ratio, strides=2, stage=1, block=1):
    bn_axis = -1
    prefix = 'stage{}/block{}'.format(stage, block)

    # out_channels is one of [116, 232, 464] at scale 1
    bottleneck_channels = int(out_channels * bottleneck_ratio / 2)
    if strides < 2:
        c_hat, c = channel_split(inputs, '{}/spl'.format(prefix))
        inputs = c

    # 1x1 convolution
    x = Conv2D(bottleneck_channels, kernel_size=(1, 1), strides=1, padding='same',
               name='{}/1x1conv_1'.format(prefix))(inputs)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_1'.format(prefix))(x)
    x = Activation('relu', name='{}/relu_1x1conv_1'.format(prefix))(x)
    # 3x3 depthwise convolution
    x = DepthwiseConv2D(kernel_size=3, strides=strides, padding='same',
                        name='{}/3x3dwconv'.format(prefix))(x)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_3x3dwconv'.format(prefix))(x)
    # 1x1 convolution
    x = Conv2D(bottleneck_channels, kernel_size=1, strides=1, padding='same',
               name='{}/1x1conv_2'.format(prefix))(x)
    x = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_2'.format(prefix))(x)
    x = Activation('relu', name='{}/relu_1x1conv_2'.format(prefix))(x)

    # When strides == 2, the shortcut branch also needs convolutions
    if strides < 2:
        ret = Concatenate(axis=bn_axis, name='{}/concat_1'.format(prefix))([x, c_hat])
    else:
        s2 = DepthwiseConv2D(kernel_size=3, strides=2, padding='same',
                             name='{}/3x3dwconv_2'.format(prefix))(inputs)
        s2 = BatchNormalization(axis=bn_axis, name='{}/bn_3x3dwconv_2'.format(prefix))(s2)
        s2 = Conv2D(bottleneck_channels, kernel_size=1, strides=1, padding='same',
                    name='{}/1x1_conv_3'.format(prefix))(s2)
        s2 = BatchNormalization(axis=bn_axis, name='{}/bn_1x1conv_3'.format(prefix))(s2)
        s2 = Activation('relu', name='{}/relu_1x1conv_3'.format(prefix))(s2)
        ret = Concatenate(axis=bn_axis, name='{}/concat_2'.format(prefix))([x, s2])

    # Shuffle the concatenated channels so information flows between branches
    ret = Lambda(channel_shuffle, name='{}/channel_shuffle'.format(prefix))(ret)
    return ret

def block(x, channel_map, bottleneck_ratio, repeat=1, stage=1):
    # One downsampling unit (strides=2) followed by `repeat` strides=1 units
    x = shuffle_unit(x, out_channels=channel_map[stage - 1], strides=2,
                     bottleneck_ratio=bottleneck_ratio, stage=stage, block=1)
    for i in range(1, repeat + 1):
        x = shuffle_unit(x, out_channels=channel_map[stage - 1], strides=1,
                         bottleneck_ratio=bottleneck_ratio, stage=stage, block=(1 + i))
    return x

def ShuffleNetV2(input_tensor=None, pooling='max', input_shape=(224, 224, 3),
                 num_shuffle_units=[3, 7, 3], scale_factor=1,
                 bottleneck_ratio=1, classes=1000):
    name = 'ShuffleNetV2_{}_{}_{}'.format(scale_factor, bottleneck_ratio,
                                          ''.join([str(x) for x in num_shuffle_units]))
    # Stage-2 output channels for each scale
    out_dim_stage_two = {0.5: 48, 1: 116, 1.5: 176, 2: 244}
    out_channels_in_stage = np.array([1, 1, 2, 4])
    # Calculate the output channels for each stage
    out_channels_in_stage *= out_dim_stage_two[scale_factor]
    out_channels_in_stage[0] = 24  # the first stage always has 24 output channels
    out_channels_in_stage = out_channels_in_stage.astype(int)

    img_input = Input(shape=input_shape)

    # 224,224,3 -> 112,112,24
    x = Conv2D(filters=out_channels_in_stage[0], kernel_size=(3, 3), padding='same',
               use_bias=False, strides=(2, 2), activation='relu', name='conv1')(img_input)
    # 112,112,24 -> 56,56,24
    x = MaxPool2D(pool_size=(3, 3), strides=(2, 2), padding='same', name='maxpool1')(x)

    # Stages 2-4: 56,56,24 -> 28,28,116 -> 14,14,232 -> 7,7,464 at scale 1
    for stage in range(len(num_shuffle_units)):
        repeat = num_shuffle_units[stage]
        x = block(x, out_channels_in_stage, repeat=repeat,
                  bottleneck_ratio=bottleneck_ratio, stage=stage + 2)

    # Final 1x1 convolution: 7,7,464 -> 7,7,1024 (2048 for the 2x model)
    if scale_factor != 2:
        x = Conv2D(1024, kernel_size=1, padding='same', strides=1,
                   name='1x1conv5_out', activation='relu')(x)
    else:
        x = Conv2D(2048, kernel_size=1, padding='same', strides=1,
                   name='1x1conv5_out', activation='relu')(x)

    # Global pooling followed by the fully connected classifier
    x = GlobalAveragePooling2D(name='global_avg_pool')(x)
    x = Dense(classes, name='fc')(x)
    x = Activation('softmax', name='softmax')(x)

    inputs = img_input
    model = Model(inputs, x, name=name)
    return model

if __name__ == '__main__':
    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = ''
    model = ShuffleNetV2(input_shape=(224, 224, 3), scale_factor=1)
    model.summary()
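Beyond model.summary(), a short forward-pass sketch (my own addition; the weights are random, so the prediction itself is meaningless) confirms the model builds and produces a valid classifier output:

# Sanity check with random weights: the output should be a (1, 1000) softmax.
import numpy as np
model = ShuffleNetV2(input_shape=(224, 224, 3), scale_factor=1)
dummy = np.random.rand(1, 224, 224, 3).astype('float32')
preds = model.predict(dummy)
print(preds.shape)  # (1, 1000)
print(preds.sum())  # ~1.0 because of the final softmax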
That concludes this detailed walkthrough of reproducing the ShuffleNetV2 model in Python. For more material on reproducing ShuffleNetV2, see the other related articles on 腳本之家!