Building a YoloV3 Object-Detection Platform on the EfficientNet Model with Keras
What is the EfficientNet model?
In 2019, Google released EfficientNet. Building on earlier networks, it sharply reduced the parameter count while improving prediction accuracy. A network this strong is well worth studying and reproducing.
EfficientNet lives up to its name: it is a remarkably efficient network. To understand what "efficient" means here, look at how convolutional neural networks have developed:
From the early VGG16 to today's Xception, people gradually realized that improving a network's performance is not simply a matter of stacking more layers. Several points matter more:
1. The network must be trainable and able to converge.
2. The parameter count should be small, which eases training and improves speed.
3. The network structure should be innovative, so that it learns the more important features.
EfficientNet does all of this well: it uses fewer parameters (which helps training and speed) to reach the best accuracy (it learns the more important features).
Source code download
https://github.com/bubbliiiing/efficientnet-yolo3-keras
Implementation ideas behind the EfficientNet model
1. Characteristics of the EfficientNet model
EfficientNet has a very distinctive design that draws on other strong neural networks. Classic networks improve along three lines:
1. Residual connections increase the depth of the network, and the deeper network performs richer feature extraction.
2. Increasing the number of feature channels per layer extracts more features at each level, i.e. the network grows wider.
3. Increasing the input image resolution gives the network more to learn and express, which helps accuracy.
EfficientNet combines these three ideas by scaling a baseline model along all of them at once. (MobileNet scales its model through the coefficient α, where different α values yield different accuracies and α = 1 is the baseline; ResNet likewise has a baseline model and derives its variants by changing the network depth.) EfficientNet adjusts depth, width, and input resolution together to arrive at an excellent network design.
The results EfficientNet achieves are shown below:
In EfficientNet, a single set of fixed scaling coefficients uniformly scales the network's depth, width, and resolution.
Suppose we want to spend 2^N times the computational resources. We can simply scale the network depth by α^N, the width by β^N, and the image resolution by γ^N, where α, β, and γ are constant coefficients determined by a small grid search on the original small model. (The paper reports α = 1.2, β = 1.1, γ = 1.15 under the constraint α · β² · γ² ≈ 2.)
The figure shows EfficientNet's design idea: the network's capacity is expanded along all three dimensions at the same time. A small code sketch of this scaling rule follows.
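To make the rule concrete, here is a minimal sketch of compound scaling using the α = 1.2, β = 1.1, γ = 1.15 coefficients reported in the EfficientNet paper (the function name is just for illustration; note that the published B1-B7 coefficients in the code later in this article are hand-tuned roundings rather than exact powers):

def compound_scaling(phi, alpha=1.2, beta=1.1, gamma=1.15):
    # phi is the compound coefficient: one extra unit of phi roughly
    # doubles the compute budget relative to the B0 baseline
    depth_coefficient = alpha ** phi        # scales how many blocks are stacked
    width_coefficient = beta ** phi         # scales how many channels each layer has
    resolution_coefficient = gamma ** phi   # scales the input image size
    return depth_coefficient, width_coefficient, resolution_coefficient

print(compound_scaling(1))   # (1.2, 1.1, 1.15)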
2. Structure of the EfficientNet network
EfficientNet consists of one Stem + 16 Blocks + Conv2D + GlobalAveragePooling2D + Dense. Its core is the 16 Blocks; the remaining parts differ little from an ordinary convolutional neural network.
The figure below shows the structure of EfficientNet-B0, the design baseline of the EfficientNet family:
The first part is the Stem, used for initial feature extraction; in practice it is a convolution plus batch normalization plus an activation function.
The second part is the 16 Blocks, efficientnet's distinctive feature-extraction structure; the efficient feature extraction happens as these Blocks are stacked.
The third part is Conv2D + GlobalAveragePooling2D + Dense, efficientnet's classification head; it is not used when building efficientnet-yolov3. A sketch of this head is shown below for reference.
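The sketch below shows what that classification head looks like when stacked on the 13,13,320 output of the last Block. The 1280-channel top convolution and the layer names follow the common Keras EfficientNet-B0 convention; treat them as illustrative, since this article's detector never builds them:

import tensorflow as tf
from keras import layers
from keras.layers import Input
from keras.models import Model

head_input = Input((13, 13, 320))   # last Block's output for B0 at 416x416 input
x = layers.Conv2D(1280, 1, padding='same', use_bias=False, name='top_conv')(head_input)
x = layers.BatchNormalization(axis=3, name='top_bn')(x)
x = layers.Activation(tf.nn.swish, name='top_activation')(x)
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
head_output = layers.Dense(1000, activation='softmax', name='predictions')(x)
classification_head = Model(head_input, head_output)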
The whole efficientnet is composed of 7 stages of Blocks, corresponding to Block1-Block7 in the figure above. The parameters of each stage are as follows:
DEFAULT_BLOCKS_ARGS = [
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 32, 'filters_out': 16,
     'expand_ratio': 1, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 2, 'filters_in': 16, 'filters_out': 24,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 2, 'filters_in': 24, 'filters_out': 40,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 3, 'filters_in': 40, 'filters_out': 80,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 3, 'filters_in': 80, 'filters_out': 112,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 4, 'filters_in': 112, 'filters_out': 192,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 192, 'filters_out': 320,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25}
]
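As a quick sanity check on this table: the repeats sum to the 16 Blocks mentioned above, and the stride-2 stages mark where the feature map's height and width are halved:

total_blocks = sum(args['repeats'] for args in DEFAULT_BLOCKS_ARGS)
print(total_blocks)                                        # 16
print([args['strides'] for args in DEFAULT_BLOCKS_ARGS])   # [1, 2, 2, 2, 1, 2, 1]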
The general structure of a Block is shown below. Its overall design is an inverted residual structure that combines depthwise separable convolution with an attention mechanism. Each Block splits into two parts:
The left side is the main branch: a 1x1 convolution first raises the channel count, then a 3x3 or 5x5 depthwise convolution extracts features across spatial positions. After this feature extraction, a channel attention module is applied, and finally a 1x1 convolution lowers the channel count again.
The right side is the residual branch, which is left unprocessed.
The implementation of a Block is as follows:
#-------------------------------------------------#
#   efficient_block
#-------------------------------------------------#
def block(inputs, activation_fn=tf.nn.swish, drop_rate=0., name='',
          filters_in=32, filters_out=16, kernel_size=3, strides=1,
          expand_ratio=1, se_ratio=0., id_skip=True):
    filters = filters_in * expand_ratio
    #-------------------------------------------------#
    #   Inverted residuals
    #   part1: raise the channel count with a 1x1 conv
    #-------------------------------------------------#
    if expand_ratio != 1:
        x = layers.Conv2D(filters, 1, padding='same', use_bias=False,
                          kernel_initializer=CONV_KERNEL_INITIALIZER,
                          name=name + 'expand_conv')(inputs)
        x = layers.BatchNormalization(axis=3, name=name + 'expand_bn')(x)
        x = layers.Activation(activation_fn, name=name + 'expand_activation')(x)
    else:
        x = inputs

    #------------------------------------------------------#
    #   part2: depthwise convolution, one filter per channel;
    #   when the stride is 2, zero-pad first so the height
    #   and width are halved
    #------------------------------------------------------#
    if strides == 2:
        x = layers.ZeroPadding2D(padding=correct_pad(x, kernel_size),
                                 name=name + 'dwconv_pad')(x)
        conv_pad = 'valid'
    else:
        conv_pad = 'same'
    x = layers.DepthwiseConv2D(kernel_size, strides=strides, padding=conv_pad,
                               use_bias=False,
                               depthwise_initializer=CONV_KERNEL_INITIALIZER,
                               name=name + 'dwconv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'bn')(x)
    x = layers.Activation(activation_fn, name=name + 'activation')(x)

    #------------------------------------------------------#
    #   apply channel attention (squeeze-and-excitation)
    #   to the depthwise-convolution output
    #------------------------------------------------------#
    if 0 < se_ratio <= 1:
        filters_se = max(1, int(filters_in * se_ratio))
        se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
        se = layers.Reshape((1, 1, filters), name=name + 'se_reshape')(se)
        #------------------------------------------------------#
        #   squeeze the channels, then expand them again;
        #   a sigmoid maps the attention weights into 0-1
        #------------------------------------------------------#
        se = layers.Conv2D(filters_se, 1, padding='same', activation=activation_fn,
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_reduce')(se)
        se = layers.Conv2D(filters, 1, padding='same', activation='sigmoid',
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_expand')(se)
        x = layers.multiply([x, se], name=name + 'se_excite')

    #------------------------------------------------------#
    #   part3: lower the channel count with a 1x1 conv
    #------------------------------------------------------#
    x = layers.Conv2D(filters_out, 1, padding='same', use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name=name + 'project_conv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'project_bn')(x)

    #------------------------------------------------------#
    #   part4: add the residual branch when shapes allow it
    #------------------------------------------------------#
    if (id_skip is True and strides == 1 and filters_in == filters_out):
        if drop_rate > 0:
            x = layers.Dropout(drop_rate, noise_shape=(None, 1, 1, 1),
                               name=name + 'drop')(x)
        x = layers.add([x, inputs], name=name + 'add')
    return x
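A quick smoke test of this block (it assumes the imports and helpers from the full listing in the next section; the demo shapes match a repeated iteration of B0's Block3, where the residual branch is active):

from keras.layers import Input
from keras.models import Model

demo_input = Input((52, 52, 40))
demo_output = block(demo_input, filters_in=40, filters_out=40, kernel_size=5,
                    strides=1, expand_ratio=6, se_ratio=0.25, id_skip=True,
                    name='demo_')
print(Model(demo_input, demo_output).output_shape)   # (None, 52, 52, 40)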
Building EfficientNet in code
1. Building the model code
The implementation code of EfficientNet is shown below. This version is the one used for YoloV3 and can serve as a reference:
import math
from copy import deepcopy

import tensorflow as tf
from keras import backend, layers

#-------------------------------------------------#
#   Seven stages of blocks in total; each stage
#   has its own fixed set of parameters
#-------------------------------------------------#
DEFAULT_BLOCKS_ARGS = [
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 32, 'filters_out': 16,
     'expand_ratio': 1, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 2, 'filters_in': 16, 'filters_out': 24,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 2, 'filters_in': 24, 'filters_out': 40,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 3, 'filters_in': 40, 'filters_out': 80,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 3, 'filters_in': 80, 'filters_out': 112,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25},
    {'kernel_size': 5, 'repeats': 4, 'filters_in': 112, 'filters_out': 192,
     'expand_ratio': 6, 'id_skip': True, 'strides': 2, 'se_ratio': 0.25},
    {'kernel_size': 3, 'repeats': 1, 'filters_in': 192, 'filters_out': 320,
     'expand_ratio': 6, 'id_skip': True, 'strides': 1, 'se_ratio': 0.25}
]

#-------------------------------------------------#
#   Kernel initializer
#-------------------------------------------------#
CONV_KERNEL_INITIALIZER = {
    'class_name': 'VarianceScaling',
    'config': {
        'scale': 2.0,
        'mode': 'fan_out',
        'distribution': 'normal'
    }
}

#-------------------------------------------------#
#   Computes the padding for strided convolutions
#-------------------------------------------------#
def correct_pad(inputs, kernel_size):
    img_dim = 1
    input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
    correct = (kernel_size[0] // 2, kernel_size[1] // 2)
    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))

#-------------------------------------------------#
#   Rounds the filter count so that it is
#   divisible by the divisor (8 by default)
#-------------------------------------------------#
def round_filters(filters, divisor, width_coefficient):
    filters *= width_coefficient
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:
        new_filters += divisor
    return int(new_filters)

#-------------------------------------------------#
#   Computes how many times a block is repeated
#-------------------------------------------------#
def round_repeats(repeats, depth_coefficient):
    return int(math.ceil(depth_coefficient * repeats))

#-------------------------------------------------#
#   efficient_block
#-------------------------------------------------#
def block(inputs, activation_fn=tf.nn.swish, drop_rate=0., name='',
          filters_in=32, filters_out=16, kernel_size=3, strides=1,
          expand_ratio=1, se_ratio=0., id_skip=True):
    filters = filters_in * expand_ratio
    #-------------------------------------------------#
    #   Inverted residuals
    #   part1: raise the channel count with a 1x1 conv
    #-------------------------------------------------#
    if expand_ratio != 1:
        x = layers.Conv2D(filters, 1, padding='same', use_bias=False,
                          kernel_initializer=CONV_KERNEL_INITIALIZER,
                          name=name + 'expand_conv')(inputs)
        x = layers.BatchNormalization(axis=3, name=name + 'expand_bn')(x)
        x = layers.Activation(activation_fn, name=name + 'expand_activation')(x)
    else:
        x = inputs

    #------------------------------------------------------#
    #   part2: depthwise convolution, one filter per channel;
    #   when the stride is 2, zero-pad first so the height
    #   and width are halved
    #------------------------------------------------------#
    if strides == 2:
        x = layers.ZeroPadding2D(padding=correct_pad(x, kernel_size),
                                 name=name + 'dwconv_pad')(x)
        conv_pad = 'valid'
    else:
        conv_pad = 'same'
    x = layers.DepthwiseConv2D(kernel_size, strides=strides, padding=conv_pad,
                               use_bias=False,
                               depthwise_initializer=CONV_KERNEL_INITIALIZER,
                               name=name + 'dwconv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'bn')(x)
    x = layers.Activation(activation_fn, name=name + 'activation')(x)

    #------------------------------------------------------#
    #   apply channel attention (squeeze-and-excitation)
    #   to the depthwise-convolution output
    #------------------------------------------------------#
    if 0 < se_ratio <= 1:
        filters_se = max(1, int(filters_in * se_ratio))
        se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x)
        se = layers.Reshape((1, 1, filters), name=name + 'se_reshape')(se)
        #------------------------------------------------------#
        #   squeeze the channels, then expand them again;
        #   a sigmoid maps the attention weights into 0-1
        #------------------------------------------------------#
        se = layers.Conv2D(filters_se, 1, padding='same', activation=activation_fn,
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_reduce')(se)
        se = layers.Conv2D(filters, 1, padding='same', activation='sigmoid',
                           kernel_initializer=CONV_KERNEL_INITIALIZER,
                           name=name + 'se_expand')(se)
        x = layers.multiply([x, se], name=name + 'se_excite')

    #------------------------------------------------------#
    #   part3: lower the channel count with a 1x1 conv
    #------------------------------------------------------#
    x = layers.Conv2D(filters_out, 1, padding='same', use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name=name + 'project_conv')(x)
    x = layers.BatchNormalization(axis=3, name=name + 'project_bn')(x)

    #------------------------------------------------------#
    #   part4: add the residual branch when shapes allow it
    #------------------------------------------------------#
    if (id_skip is True and strides == 1 and filters_in == filters_out):
        if drop_rate > 0:
            x = layers.Dropout(drop_rate, noise_shape=(None, 1, 1, 1),
                               name=name + 'drop')(x)
        x = layers.add([x, inputs], name=name + 'add')
    return x

def EfficientNet(width_coefficient, depth_coefficient, drop_connect_rate=0.2,
                 depth_divisor=8, activation_fn=tf.nn.swish,
                 blocks_args=DEFAULT_BLOCKS_ARGS, inputs=None, **kwargs):
    img_input = inputs

    #-------------------------------------------------#
    #   Stem: initial feature extraction
    #   416,416,3 -> 208,208,32
    #-------------------------------------------------#
    x = img_input
    x = layers.ZeroPadding2D(padding=correct_pad(x, 3), name='stem_conv_pad')(x)
    x = layers.Conv2D(round_filters(32, depth_divisor, width_coefficient), 3,
                      strides=2, padding='valid', use_bias=False,
                      kernel_initializer=CONV_KERNEL_INITIALIZER,
                      name='stem_conv')(x)
    x = layers.BatchNormalization(axis=3, name='stem_bn')(x)
    x = layers.Activation(activation_fn, name='stem_activation')(x)

    #-------------------------------------------------#
    #   Deep-copy the block arguments, since the loop
    #   below mutates them
    #-------------------------------------------------#
    blocks_args = deepcopy(blocks_args)

    #-------------------------------------------------#
    #   Total number of efficient_blocks, used to grow
    #   the drop rate linearly with depth
    #-------------------------------------------------#
    b = 0
    blocks = float(sum(args['repeats'] for args in blocks_args))

    feats = []
    filters_outs = []
    #------------------------------------------------------------------------------#
    #   Loop over the seven stages of block arguments; each stage repeats its
    #   efficient_block several times. For B0 the shapes change as:
    #   208,208,32 -> 208,208,16 -> 104,104,24 -> 52,52,40
    #   -> 26,26,80 -> 26,26,112 -> 13,13,192 -> 13,13,320
    #   Three effective feature layers are kept for the detector:
    #   52,52,40 / 26,26,112 / 13,13,320
    #------------------------------------------------------------------------------#
    for (i, args) in enumerate(blocks_args):
        assert args['repeats'] > 0
        args['filters_in'] = round_filters(args['filters_in'], depth_divisor, width_coefficient)
        args['filters_out'] = round_filters(args['filters_out'], depth_divisor, width_coefficient)

        for j in range(round_repeats(args.pop('repeats'), depth_coefficient)):
            if j > 0:
                args['strides'] = 1
                args['filters_in'] = args['filters_out']
            x = block(x, activation_fn, drop_connect_rate * b / blocks,
                      name='block{}{}_'.format(i + 1, chr(j + 97)), **args)
            b += 1
        feats.append(x)
        if i == 2 or i == 4 or i == 6:
            filters_outs.append(args['filters_out'])
    return feats, filters_outs

def EfficientNetB0(inputs=None, **kwargs):
    return EfficientNet(1.0, 1.0, inputs=inputs, **kwargs)

def EfficientNetB1(inputs=None, **kwargs):
    return EfficientNet(1.0, 1.1, inputs=inputs, **kwargs)

def EfficientNetB2(inputs=None, **kwargs):
    return EfficientNet(1.1, 1.2, inputs=inputs, **kwargs)

def EfficientNetB3(inputs=None, **kwargs):
    return EfficientNet(1.2, 1.4, inputs=inputs, **kwargs)

def EfficientNetB4(inputs=None, **kwargs):
    return EfficientNet(1.4, 1.8, inputs=inputs, **kwargs)

def EfficientNetB5(inputs=None, **kwargs):
    return EfficientNet(1.6, 2.2, inputs=inputs, **kwargs)

def EfficientNetB6(inputs=None, **kwargs):
    return EfficientNet(1.8, 2.6, inputs=inputs, **kwargs)

def EfficientNetB7(inputs=None, **kwargs):
    return EfficientNet(2.0, 3.1, inputs=inputs, **kwargs)
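A minimal sketch of how the backbone can be called on its own, assuming the listing above is saved as nets/efficientnet.py as in the repository linked earlier:

from keras.layers import Input
from nets.efficientnet import EfficientNetB0

inputs = Input((416, 416, 3))
feats, filters_outs = EfficientNetB0(inputs=inputs)
# feats[2], feats[4], feats[6] are the three effective feature layers
print(filters_outs)   # [40, 112, 320] for B0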
2. Application to Yolov3
For yolov3, we need the three effective feature layers obtained from the backbone feature-extraction network to build the enhanced feature pyramid.
With the code above we can extract three effective feature layers, and we can use them to replace the effective feature layers of yolov3's original darknet53 backbone.
To further reduce the parameter count, we shrink the channel counts of the ordinary convolutions used in yolov3.
The final code for building EfficientNet-YoloV3 is as follows:
from functools import wraps

from keras.initializers import random_normal
from keras.layers import (BatchNormalization, Concatenate, Conv2D, Input,
                          Lambda, LeakyReLU, UpSampling2D)
from keras.models import Model
from keras.regularizers import l2
from utils.utils import compose

from nets.efficientnet import (EfficientNetB0, EfficientNetB1, EfficientNetB2,
                               EfficientNetB3, EfficientNetB4, EfficientNetB5,
                               EfficientNetB6, EfficientNetB7)
from nets.yolo_training import yolo_loss

Efficient = [EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3,
             EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7]

#------------------------------------------------------#
#   Single convolution DarknetConv2D
#   Padding is set manually when the stride is 2.
#------------------------------------------------------#
@wraps(Conv2D)
def DarknetConv2D(*args, **kwargs):
    darknet_conv_kwargs = {'kernel_initializer': random_normal(stddev=0.02),
                           'kernel_regularizer': l2(5e-4)}
    darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides') == (2, 2) else 'same'
    darknet_conv_kwargs.update(kwargs)
    return Conv2D(*args, **darknet_conv_kwargs)

#---------------------------------------------------#
#   Convolution block: convolution + batch norm + activation
#   DarknetConv2D + BatchNormalization + LeakyReLU
#---------------------------------------------------#
def DarknetConv2D_BN_Leaky(*args, **kwargs):
    no_bias_kwargs = {'use_bias': False}
    no_bias_kwargs.update(kwargs)
    return compose(
        DarknetConv2D(*args, **no_bias_kwargs),
        BatchNormalization(),
        LeakyReLU(alpha=0.1))

#---------------------------------------------------#
#   Five alternating 1x1/3x3 convolutions:
#   feature layer -> refined features
#---------------------------------------------------#
def make_five_conv(x, num_filters):
    x = DarknetConv2D_BN_Leaky(num_filters, (1,1))(x)
    x = DarknetConv2D_BN_Leaky(num_filters*2, (3,3))(x)
    x = DarknetConv2D_BN_Leaky(num_filters, (1,1))(x)
    x = DarknetConv2D_BN_Leaky(num_filters*2, (3,3))(x)
    x = DarknetConv2D_BN_Leaky(num_filters, (1,1))(x)
    return x

def make_yolo_head(x, num_filters, out_filters):
    y = DarknetConv2D_BN_Leaky(num_filters*2, (3,3))(x)
    y = DarknetConv2D(out_filters, (1,1))(y)
    return y

#---------------------------------------------------#
#   Builds the FPN and produces the predictions
#---------------------------------------------------#
def yolo_body(input_shape, anchors_mask, num_classes, phi=0):
    inputs = Input(input_shape)
    #---------------------------------------------------#
    #   Build the EfficientNet backbone and take its three
    #   effective feature layers; for B0 their shapes are:
    #   feat1: 52,52,40
    #   feat2: 26,26,112
    #   feat3: 13,13,320
    #   (they replace darknet53's 52,52,256 / 26,26,512 / 13,13,1024)
    #---------------------------------------------------#
    feats, filters_outs = Efficient[phi](inputs=inputs)
    feat1 = feats[2]
    feat2 = feats[4]
    feat3 = feats[6]

    #---------------------------------------------------#
    #   First prediction head
    #   y1 = (batch_size, 13, 13, 3 * (num_classes + 5))
    #---------------------------------------------------#
    # for B0: 13,13,320 -> 13,13,320 -> 13,13,640 -> 13,13,320 -> 13,13,640 -> 13,13,320
    x = make_five_conv(feat3, int(filters_outs[2]))
    P5 = make_yolo_head(x, int(filters_outs[2]), len(anchors_mask[0]) * (num_classes + 5))

    # 13,13,320 -> 13,13,112 -> 26,26,112
    x = compose(DarknetConv2D_BN_Leaky(int(filters_outs[1]), (1,1)), UpSampling2D(2))(x)
    # 26,26,112 + 26,26,112 -> 26,26,224
    x = Concatenate()([x, feat2])

    #---------------------------------------------------#
    #   Second prediction head
    #   y2 = (batch_size, 26, 26, 3 * (num_classes + 5))
    #---------------------------------------------------#
    # for B0: 26,26,224 -> 26,26,112 -> 26,26,224 -> 26,26,112 -> 26,26,224 -> 26,26,112
    x = make_five_conv(x, int(filters_outs[1]))
    P4 = make_yolo_head(x, int(filters_outs[1]), len(anchors_mask[1]) * (num_classes + 5))

    # 26,26,112 -> 26,26,40 -> 52,52,40
    x = compose(DarknetConv2D_BN_Leaky(int(filters_outs[0]), (1,1)), UpSampling2D(2))(x)
    # 52,52,40 + 52,52,40 -> 52,52,80
    x = Concatenate()([x, feat1])

    #---------------------------------------------------#
    #   Third prediction head
    #   y3 = (batch_size, 52, 52, 3 * (num_classes + 5))
    #---------------------------------------------------#
    # for B0: 52,52,80 -> 52,52,40 -> 52,52,80 -> 52,52,40 -> 52,52,80 -> 52,52,40
    x = make_five_conv(x, int(filters_outs[0]))
    P3 = make_yolo_head(x, int(filters_outs[0]), len(anchors_mask[2]) * (num_classes + 5))
    return Model(inputs, [P5, P4, P3])
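A minimal instantiation of the detector (the anchors_mask below is the conventional yolov3 grouping of nine anchors into three scales, and num_classes = 20 assumes a VOC-style dataset; adjust both to your own data):

anchors_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
model = yolo_body((416, 416, 3), anchors_mask, num_classes=20, phi=0)
model.summary()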
That concludes this walkthrough of using Keras and the efficientnet family of models to build a yolov3 object-detection platform.