Python Object Detection: YOLOv3 Prediction Explained and Code Reproduction
Preface
Now that the analysis of YOLOv2 is finished, it is of course time to cover YOLOv3. The difference between YOLOv3 and YOLOv2 lies mainly in the feature-extraction part of the network; the actual decoding part differs very little.
Code Download
This tutorial is mainly based on a project on GitHub (click to download it directly). It is somewhat easier to follow than the yolo3-Keras project, although much of its code is the same as yolo3-Keras.
I kept the prediction part of the code; in practice you can run the demo by executing detect.py.
Link: https://pan.baidu.com/s/1_xLeytnjBBSL2h2Kj4-YEw
Extraction code: i3hi
Implementation Approach
1. YOLOv3 prediction approach (how the network is built)
Compared with the earlier yolo1 and yolo2, YOLOv3 brings fairly large improvements, mainly in the following directions:
1. It uses a residual network (Residual). A residual convolution first performs a 3x3 convolution with stride 2 and saves the resulting layer, then performs a 1x1 convolution followed by a 3x3 convolution, and adds the saved layer to this result to form the final output (a minimal residual-block sketch follows this list). Residual networks are easy to optimize and can improve accuracy by adding considerable depth; the skip connections inside the residual blocks alleviate the vanishing-gradient problem that comes with increasing depth in deep neural networks.
2. It extracts multiple feature layers for object detection, three in total, with shapes (13,13,75), (26,26,75) and (52,52,75). The last dimension is 75 because this figure is based on the VOC dataset, which has 20 classes; yolo3 has 3 anchor boxes for each feature layer, so the last dimension is 3 x 25.
If the COCO training set is used instead, there are 80 classes and the last dimension becomes 255 = 3 x 85, so the shapes of the three feature layers are (13,13,255), (26,26,255) and (52,52,255).
3. It uses an upsampling (UpSampling2D) design in the style of a deconvolution: relative to a convolution, this step performs the opposite operation in the forward and backward passes of the network, which allows more and better features to be extracted.
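As a rough illustration of the residual structure described in point 1, here is a minimal TensorFlow 1.x sketch. It is written with tf.layers rather than the project's own _conv2d_layer / _batch_normalization_layer helpers, so the layer names and padding details are simplified assumptions, not project code.

import tensorflow as tf  # TensorFlow 1.x, matching the API style used by the project

def conv_bn_leaky(x, filters, kernel_size, strides, training):
    # convolution + batch normalization + LeakyReLU, the basic unit of Darknet-53
    x = tf.layers.conv2d(x, filters, kernel_size, strides=strides, padding='same', use_bias=False)
    x = tf.layers.batch_normalization(x, training=training)
    return tf.nn.leaky_relu(x, alpha=0.1)

def residual_block(x, filters, training=False):
    # downsample once with a 3x3, stride-2 convolution and save the result
    x = conv_bn_leaky(x, filters, 3, 2, training)
    shortcut = x
    # a 1x1 convolution that halves the channels, then a 3x3 that restores them
    x = conv_bn_leaky(x, filters // 2, 1, 1, training)
    x = conv_bn_leaky(x, filters, 3, 1, training)
    # add the saved layer back: the skip connection of the residual block
    return x + shortcut

# shape check: a 416x416x3 input becomes 208x208x64 after one such block
inputs = tf.placeholder(tf.float32, [None, 416, 416, 3])
print(residual_block(inputs, 64).shape)  # (?, 208, 208, 64)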
In practice, if N images of size 416x416 are fed in, then after passing through many layers the network outputs three tensors with shapes (N,13,13,255), (N,26,26,255) and (N,52,52,255), corresponding to the 3 anchor boxes at each position of the 13x13, 26x26 and 52x52 grids into which every image is divided.
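As a quick check on these channel and grid-size numbers, a few lines of plain Python (not project code) reproduce the arithmetic:

# Channel count of each YOLOv3 head: 3 anchors per grid cell, each predicting
# 4 box offsets + 1 confidence + num_classes class probabilities.
def head_channels(num_classes, num_anchors=3):
    return num_anchors * (num_classes + 5)

print(head_channels(20))   # 75  -> VOC:  (13,13,75), (26,26,75), (52,52,75)
print(head_channels(80))   # 255 -> COCO: (13,13,255), (26,26,255), (52,52,255)

# The three grid sizes come from downsampling the 416x416 input by 32, 16 and 8.
print([416 // s for s in (32, 16, 8)])  # [13, 26, 52]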
The implementation is shown below. In practice the yolo_inference function is the one called, which returns the contents of the three feature layers.
def _darknet53(self, inputs, conv_index, training = True, norm_decay = 0.99, norm_epsilon = 1e-3):
    """
    Introduction
    ------------
        Build the darknet53 backbone used by yolo3
    Parameters
    ----------
        inputs: model input tensor
        conv_index: index of the convolution layer, used to load pretrained weights by name
        training: whether the model is in training mode
        norm_decay: decay rate of the moving average computed at prediction time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        conv: result after 52 convolution layers; for a 416x416x3 input the output shape is 13x13x1024
        route1: output of the 26th convolution layer, 52x52x256, kept for later use
        route2: output of the 43rd convolution layer, 26x26x512, kept for later use
        conv_index: convolution layer counter, used when loading the pretrained model
    """
    with tf.variable_scope('darknet53'):
        # 416,416,3 -> 416,416,32
        conv = self._conv2d_layer(inputs, filters_num = 32, kernel_size = 3, strides = 1, name = "conv2d_" + str(conv_index))
        conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        conv_index += 1
        # 416,416,32 -> 208,208,64
        conv, conv_index = self._Residual_block(conv, conv_index = conv_index, filters_num = 64, blocks_num = 1, training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        # 208,208,64 -> 104,104,128
        conv, conv_index = self._Residual_block(conv, conv_index = conv_index, filters_num = 128, blocks_num = 2, training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        # 104,104,128 -> 52,52,256
        conv, conv_index = self._Residual_block(conv, conv_index = conv_index, filters_num = 256, blocks_num = 8, training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        # route1 = 52,52,256
        route1 = conv
        # 52,52,256 -> 26,26,512
        conv, conv_index = self._Residual_block(conv, conv_index = conv_index, filters_num = 512, blocks_num = 8, training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        # route2 = 26,26,512
        route2 = conv
        # 26,26,512 -> 13,13,1024
        conv, conv_index = self._Residual_block(conv, conv_index = conv_index, filters_num = 1024, blocks_num = 4, training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
        # route3 = 13,13,1024
    return route1, route2, conv, conv_index

# The block outputs two results:
# the first, after 5 convolutions (1x1, 3x3, 1x1, 3x3, 1x1), is used for the next upsampling step;
# the second, after 5 + 2 convolutions (1x1, 3x3, 1x1, 3x3, 1x1, 3x3, 1x1), is used as a detection feature layer.
def _yolo_block(self, inputs, filters_num, out_filters, conv_index, training = True, norm_decay = 0.99, norm_epsilon = 1e-3):
    """
    Introduction
    ------------
        On top of the feature layers extracted by Darknet53, yolo3 adds blocks for three
        feature maps of different scales to improve the detection rate of small objects
    Parameters
    ----------
        inputs: input feature map
        filters_num: number of convolution filters
        out_filters: number of filters of the final output layer
        conv_index: index of the convolution layer, used to load pretrained weights by name
        training: whether the model is in training mode
        norm_decay: decay rate of the moving average computed at prediction time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        route: output of the layer before the last convolution
        conv: output of the last convolution
        conv_index: convolution layer counter
    """
    conv = self._conv2d_layer(inputs, filters_num = filters_num, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num = filters_num * 2, kernel_size = 3, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num = filters_num, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num = filters_num * 2, kernel_size = 3, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num = filters_num, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    route = conv
    conv = self._conv2d_layer(conv, filters_num = filters_num * 2, kernel_size = 3, strides = 1, name = "conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = norm_decay, norm_epsilon = norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num = out_filters, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index), use_bias = True)
    conv_index += 1
    return route, conv, conv_index

# Returns the contents of the three detection feature layers
def yolo_inference(self, inputs, num_anchors, num_classes, training = True):
    """
    Introduction
    ------------
        Build the yolo model structure
    Parameters
    ----------
        inputs: model input tensor
        num_anchors: number of anchors each grid cell is responsible for
        num_classes: number of classes
        training: whether the model is in training mode
    """
    conv_index = 1
    # route1 = 52,52,256, route2 = 26,26,512, route3 = 13,13,1024
    conv2d_26, conv2d_43, conv, conv_index = self._darknet53(inputs, conv_index, training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
    with tf.variable_scope('yolo'):
        #--------------------------------------#
        #   First feature layer
        #--------------------------------------#
        # conv2d_57 = 13,13,512, conv2d_59 = 13,13,255 (3x(80+5))
        conv2d_57, conv2d_59, conv_index = self._yolo_block(conv, 512, num_anchors * (num_classes + 5), conv_index = conv_index, training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
        #--------------------------------------#
        #   Second feature layer
        #--------------------------------------#
        conv2d_60 = self._conv2d_layer(conv2d_57, filters_num = 256, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index))
        conv2d_60 = self._batch_normalization_layer(conv2d_60, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
        conv_index += 1
        # unSample_0 = 26,26,256
        unSample_0 = tf.image.resize_nearest_neighbor(conv2d_60, [2 * tf.shape(conv2d_60)[1], 2 * tf.shape(conv2d_60)[1]], name = 'upSample_0')
        # route0 = 26,26,768
        route0 = tf.concat([unSample_0, conv2d_43], axis = -1, name = 'route_0')
        # conv2d_65 = 26,26,256, conv2d_67 = 26,26,255
        conv2d_65, conv2d_67, conv_index = self._yolo_block(route0, 256, num_anchors * (num_classes + 5), conv_index = conv_index, training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
        #--------------------------------------#
        #   Third feature layer
        #--------------------------------------#
        conv2d_68 = self._conv2d_layer(conv2d_65, filters_num = 128, kernel_size = 1, strides = 1, name = "conv2d_" + str(conv_index))
        conv2d_68 = self._batch_normalization_layer(conv2d_68, name = "batch_normalization_" + str(conv_index), training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
        conv_index += 1
        # unSample_1 = 52,52,128
        unSample_1 = tf.image.resize_nearest_neighbor(conv2d_68, [2 * tf.shape(conv2d_68)[1], 2 * tf.shape(conv2d_68)[1]], name = 'upSample_1')
        # route1 = 52,52,384
        route1 = tf.concat([unSample_1, conv2d_26], axis = -1, name = 'route_1')
        # conv2d_75 = 52,52,255
        _, conv2d_75, _ = self._yolo_block(route1, 128, num_anchors * (num_classes + 5), conv_index = conv_index, training = training, norm_decay = self.norm_decay, norm_epsilon = self.norm_epsilon)
    return [conv2d_59, conv2d_67, conv2d_75]
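The upsample-and-concatenate step inside yolo_inference can be isolated as a small runnable illustration (TensorFlow 1.x, not project code); the 256- and 512-channel placeholders below mirror the second-feature-layer branch above.

import tensorflow as tf  # TensorFlow 1.x

# Minimal illustration of the upsample + concat step between detection heads.
coarse = tf.placeholder(tf.float32, [None, 13, 13, 256])   # after the 1x1 convolution on conv2d_57
route  = tf.placeholder(tf.float32, [None, 26, 26, 512])   # darknet53's 26x26x512 branch (conv2d_43)

# The project doubles the spatial size dynamically with 2 * tf.shape(...)[1];
# a fixed target size is used here so the static shapes stay visible.
upsampled = tf.image.resize_nearest_neighbor(coarse, [26, 26])
merged = tf.concat([upsampled, route], axis = -1)
print(merged.shape)   # (?, 26, 26, 768), i.e. 256 upsampled channels + 512 routed channels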
2. Decoding the network output with the anchor boxes
The anchor boxes of yolo3 are generated in much the same way as in yolo2. If you are not sure how the anchors are generated, see my previous post:
python目標(biāo)檢測(cè)yolo2詳解及其預(yù)測(cè)代碼復(fù)現(xiàn) (Python object detection: YOLOv2 explained with prediction code reproduction)
The decoding process of yolo3 is actually the same as that of yolo2, except that yolo3 has to decode three feature layers, whose shapes are (N,13,13,255), (N,26,26,255) and (N,52,52,255), corresponding to the 3 anchor boxes at each position of the 13x13, 26x26 and 52x52 grids into which every image is divided.
A loop over the feature layers is therefore needed here.
1. Reshape the first feature layer to [-1, 13, 13, 3, 80 + 5], which represents the 3 anchor boxes at each of the 169 grid-cell centers.
2. Split the 5 in 80 + 5 into x, y, w, h and confidence: indices 0 and 1 are the x, y offsets relative to the cell center; 2 and 3 are the width and height; 4 is the confidence.
3. Build a 13x13 grid representing the cell centers after the image has been divided into 13x13 cells.
4. Apply the decode formulas to compute the actual bounding-box positions (a small numeric sketch follows this list).
5. Multiply the confidence by the 80 class probabilities in 80 + 5 to obtain the scores.
6. Reshape the second feature layer to [-1, 26, 26, 3, 80 + 5] and repeat steps 2 to 5; reshape the third feature layer to [-1, 52, 52, 3, 80 + 5] and repeat steps 2 to 5.
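To make step 4 concrete, here is a tiny NumPy sketch of the decode formula applied to a single prediction with made-up raw outputs; the 116x90 anchor is one of the standard COCO anchors for the 13x13 layer. The project's full implementation for a feature layer follows below.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grid_size, input_size = 13, 416
cx, cy = 6, 6                           # grid cell that owns the prediction
anchor_w, anchor_h = 116.0, 90.0        # one standard anchor of the 13x13 layer
tx, ty, tw, th = 0.2, -0.1, 0.3, 0.1    # raw network outputs (made-up numbers)

# center: sigmoid offset inside the cell, normalized by the grid size
bx = (sigmoid(tx) + cx) / grid_size
by = (sigmoid(ty) + cy) / grid_size
# size: anchor scaled by exp of the raw output, normalized by the input size
bw = anchor_w * np.exp(tw) / input_size
bh = anchor_h * np.exp(th) / input_size
print(bx, by, bw, bh)                   # all in [0, 1], relative to the 416x416 input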
The decoding code for a single feature layer is as follows:
# Obtain the box positions and scores for a single feature layer
def boxes_and_scores(self, feats, anchors, classes_num, input_shape, image_shape):
    """
    Introduction
    ------------
        Convert the predicted box coordinates to coordinates on the original image, then compute the score of each box
    Parameters
    ----------
        feats: feature map output by yolo
        anchors: anchor box sizes
        classes_num: number of classes
        input_shape: network input size
        image_shape: original image size
    Returns
    -------
        boxes: box positions
        boxes_scores: box scores, the product of confidence and class probability
    """
    # Decode the feature layer
    box_xy, box_wh, box_confidence, box_class_probs = self._get_feats(feats, anchors, classes_num, input_shape)
    # Map the boxes back onto the original image
    boxes = self.correct_boxes(box_xy, box_wh, input_shape, image_shape)
    boxes = tf.reshape(boxes, [-1, 4])
    # Score = box_confidence * box_class_probs
    box_scores = box_confidence * box_class_probs
    box_scores = tf.reshape(box_scores, [-1, classes_num])
    return boxes, box_scores

# Decoding process of a single feature layer
def _get_feats(self, feats, anchors, num_classes, input_shape):
    """
    Introduction
    ------------
        Determine the bounding boxes from the output of yolo's last layer
    Parameters
    ----------
        feats: output of the last yolo layer
        anchors: anchor box sizes
        num_classes: number of classes
        input_shape: network input size
    Returns
    -------
        box_xy, box_wh, box_confidence, box_class_probs
    """
    num_anchors = len(anchors)
    anchors_tensor = tf.reshape(tf.constant(anchors, dtype = tf.float32), [1, 1, 1, num_anchors, 2])
    grid_size = tf.shape(feats)[1:3]
    predictions = tf.reshape(feats, [-1, grid_size[0], grid_size[1], num_anchors, num_classes + 5])
    # Build a 13x13x1x2 grid (for the 13x13 layer) holding the coordinates of each cell
    grid_y = tf.tile(tf.reshape(tf.range(grid_size[0]), [-1, 1, 1, 1]), [1, grid_size[1], 1, 1])
    grid_x = tf.tile(tf.reshape(tf.range(grid_size[1]), [1, -1, 1, 1]), [grid_size[0], 1, 1, 1])
    grid = tf.concat([grid_x, grid_y], axis = -1)
    grid = tf.cast(grid, tf.float32)
    # Normalize x, y: position relative to the grid
    box_xy = (tf.sigmoid(predictions[..., :2]) + grid) / tf.cast(grid_size[::-1], tf.float32)
    # Normalize w, h as well
    box_wh = tf.exp(predictions[..., 2:4]) * anchors_tensor / tf.cast(input_shape[::-1], tf.float32)
    box_confidence = tf.sigmoid(predictions[..., 4:5])
    box_class_probs = tf.sigmoid(predictions[..., 5:])
    return box_xy, box_wh, box_confidence, box_class_probs
This function is called by another function to complete the decoding of the three feature layers:
def eval(self, yolo_outputs, image_shape, max_boxes = 20):
    """
    Introduction
    ------------
        Apply non-maximum suppression to the output of the Yolo model and obtain the final detection boxes and classes
    Parameters
    ----------
        yolo_outputs: yolo model output
        image_shape: original image size
        max_boxes: maximum number of boxes
    Returns
    -------
        boxes_: box positions
        scores_: object scores (class probabilities)
        classes_: object classes
    """
    # Each feature layer uses three of the anchors
    anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
    boxes = []
    box_scores = []
    # input_shape is 416x416
    # image_shape is the size of the actual image
    input_shape = tf.shape(yolo_outputs[0])[1 : 3] * 32
    # For each of the three feature layer outputs, get the predicted box coordinates and box scores, score = confidence x class probability
    #---------------------------------------#
    #   Decode the three feature layers
    #   to obtain the scores and box positions
    #---------------------------------------#
    for i in range(len(yolo_outputs)):
        _boxes, _box_scores = self.boxes_and_scores(yolo_outputs[i], self.anchors[anchor_mask[i]], len(self.class_names), input_shape, image_shape)
        boxes.append(_boxes)
        box_scores.append(_box_scores)
    # Stack everything together so it is easier to work with
    boxes = tf.concat(boxes, axis = 0)
    box_scores = tf.concat(box_scores, axis = 0)
3. Score filtering and non-maximum suppression
This part is basically common to all object detectors. The way this project handles it, however, differs from other projects: it processes each class separately.
1. For each class, take the boxes and scores whose score is greater than self.obj_threshold.
2. Apply non-maximum suppression using the box positions and scores.
The implementation is as follows:
#---------------------------------------#
#   1. For each class, keep the boxes and scores
#      greater than self.obj_threshold
#   2. Apply non-maximum suppression to the scores
#---------------------------------------#
# Process each class separately
for c in range(len(self.class_names)):
    # Take all boxes predicted as class c
    class_boxes = tf.boolean_mask(boxes, mask[:, c])
    # Take the corresponding scores for class c
    class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
    # Non-maximum suppression
    nms_index = tf.image.non_max_suppression(class_boxes, class_box_scores, max_boxes_tensor, iou_threshold = self.nms_threshold)
    # Gather the results of non-maximum suppression
    class_boxes = tf.gather(class_boxes, nms_index)
    class_box_scores = tf.gather(class_box_scores, nms_index)
    classes = tf.ones_like(class_box_scores, 'int32') * c
    boxes_.append(class_boxes)
    scores_.append(class_box_scores)
    classes_.append(classes)
boxes_ = tf.concat(boxes_, axis = 0)
scores_ = tf.concat(scores_, axis = 0)
classes_ = tf.concat(classes_, axis = 0)
In the actual code, this part sits in the same function as part 2; together they decode and filter the network output, completing the prediction process.
def eval(self, yolo_outputs, image_shape, max_boxes = 20):
    """
    Introduction
    ------------
        Apply non-maximum suppression to the output of the Yolo model and obtain the final detection boxes and classes
    Parameters
    ----------
        yolo_outputs: yolo model output
        image_shape: original image size
        max_boxes: maximum number of boxes
    Returns
    -------
        boxes_: box positions
        scores_: object scores (class probabilities)
        classes_: object classes
    """
    # Each feature layer uses three of the anchors
    anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
    boxes = []
    box_scores = []
    # input_shape is 416x416
    # image_shape is the size of the actual image
    input_shape = tf.shape(yolo_outputs[0])[1 : 3] * 32
    # For each of the three feature layer outputs, get the predicted box coordinates and box scores, score = confidence x class probability
    #---------------------------------------#
    #   Decode the three feature layers
    #   to obtain the scores and box positions
    #---------------------------------------#
    for i in range(len(yolo_outputs)):
        _boxes, _box_scores = self.boxes_and_scores(yolo_outputs[i], self.anchors[anchor_mask[i]], len(self.class_names), input_shape, image_shape)
        boxes.append(_boxes)
        box_scores.append(_box_scores)
    # Stack everything together so it is easier to work with
    boxes = tf.concat(boxes, axis = 0)
    box_scores = tf.concat(box_scores, axis = 0)
    mask = box_scores >= self.obj_threshold
    max_boxes_tensor = tf.constant(max_boxes, dtype = tf.int32)
    boxes_ = []
    scores_ = []
    classes_ = []
    #---------------------------------------#
    #   1. For each class, keep the boxes and scores
    #      greater than self.obj_threshold
    #   2. Apply non-maximum suppression to the scores
    #---------------------------------------#
    # Process each class separately
    for c in range(len(self.class_names)):
        # Take all boxes predicted as class c
        class_boxes = tf.boolean_mask(boxes, mask[:, c])
        # Take the corresponding scores for class c
        class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
        # Non-maximum suppression
        nms_index = tf.image.non_max_suppression(class_boxes, class_box_scores, max_boxes_tensor, iou_threshold = self.nms_threshold)
        # Gather the results of non-maximum suppression
        class_boxes = tf.gather(class_boxes, nms_index)
        class_box_scores = tf.gather(class_box_scores, nms_index)
        classes = tf.ones_like(class_box_scores, 'int32') * c
        boxes_.append(class_boxes)
        scores_.append(class_box_scores)
        classes_.append(classes)
    boxes_ = tf.concat(boxes_, axis = 0)
    scores_ = tf.concat(scores_, axis = 0)
    classes_ = tf.concat(classes_, axis = 0)
    return boxes_, scores_, classes_
Once the box positions and classes have been obtained, the results can be drawn on the image.
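The project has its own drawing routine; as a stand-in, a minimal OpenCV sketch could look like the following. It assumes the boxes come back as (ymin, xmin, ymax, xmax) pixel coordinates on the original image, which is what correct_boxes appears to produce in this project.

import cv2
import numpy as np

def draw_boxes(image, boxes, scores, classes, class_names):
    # Assumes boxes are (ymin, xmin, ymax, xmax) in pixel coordinates of the original image.
    for (ymin, xmin, ymax, xmax), score, cls in zip(boxes, scores, classes):
        pt1, pt2 = (int(xmin), int(ymin)), (int(xmax), int(ymax))
        cv2.rectangle(image, pt1, pt2, (0, 255, 0), 2)
        label = '{} {:.2f}'.format(class_names[int(cls)], score)
        cv2.putText(image, label, (pt1[0], max(pt1[1] - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return image

# example usage with dummy data
img = np.zeros((416, 416, 3), dtype=np.uint8)
out = draw_boxes(img, [(50, 60, 200, 220)], [0.91], [0], ['person'])
cv2.imwrite('result.jpg', out)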
Implementation Results
This concludes the detailed discussion of YOLOv3 prediction and code reproduction for object detection in Python. For more material on reproducing YOLOv3 prediction, see the other related articles on 腳本之家.