
A Detailed Walkthrough of Video Object Detection and Trajectory Tracking in Python

 Updated: January 28, 2023, 09:38:00   Author: 傾城一少
After reviewing the relevant literature and running tests, I arrived at an improved method based on multi-template matching that can detect moving targets in remote-sensing satellite video and plot their trajectories. The experimental results show that it tracks moving targets fairly effectively.

I. Principle

The core idea is simple: match the same region with templates rotated to several different angles, and among all the results take the one with the highest similarity as the successful match. Matching each of these templates against a given frame of the video yields a fairly accurate result.

The figure above shows the object search window for one frame. 0° denotes the original extracted template; rotating it in 8 directions yields 8 templates at different rotation angles. Each of them is matched against the window in turn to obtain a similarity score, and the coordinates of the template with the highest similarity are taken as the trajectory point.

Depending on the required precision, 4, 8, or 16 templates can be used, with the corresponding directions shown below. The more templates, the better and more precisely rotation is detected, but the computation cost also grows proportionally. The sketch after this paragraph illustrates the idea.
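
To make the idea concrete, here is a minimal, self-contained sketch (independent of the full script in the next section; the function name best_rotated_match and its arguments are illustrative only, and it assumes the search window is larger than the template and has the same number of channels): it rotates a template by evenly spaced angles, matches each rotated copy against a search window with cv2.matchTemplate, and keeps the angle with the highest score.

import cv2

def best_rotated_match(window, template, num_templates=8):
    # Try num_templates evenly spaced rotations of the template and return
    # (best_angle, best_score, best_location) within the search window.
    d = max(template.shape[0], template.shape[1])
    best = (None, -2.0, None)  # -2.0 is below the minimum TM_CCOEFF_NORMED score
    for i in range(num_templates):
        angle = i * 360.0 / num_templates
        M = cv2.getRotationMatrix2D((d / 2, d / 2), -angle, 1)
        rotated = cv2.warpAffine(template, M, (d, d))
        res = cv2.matchTemplate(window, rotated, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[1]:
            best = (angle, max_val, max_loc)
    return best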

II. Code Implementation

# coding=utf-8
import cv2
import numpy as np
import math
def calcVelocity(x1, x2, y1, y2, res, wT):
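    # dist: straight-line displacement in metres (pixel distance multiplied by the
    # ground resolution res); wT is the per-frame wait time in milliseconds, so
    # dist / (wT / 1000) gives m/s and the final factor 3.6 converts it to km/h.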
    dist = pow(pow(y1 - y2, 2) + pow(x1 - x2, 2), 0.5) * res
    v = dist / (wT / 1000.0) * 3.6
    return v
# --------------- Required parameters ---------------
# Path of the video to process
video_path = 'E:\\object\\test_real.mp4'
# Ground resolution of the satellite video (metres per pixel)
resolution = 2
# Estimated maximum speed of the target (km/h)
velocity = 850
# --------------- Required parameters ---------------
# --------------- Optional parameters ---------------
# Whether the extracted template should be square
isSquare = True
# Whether to derive the distance threshold automatically from the speed
isAutoDisThresh = True
# Whether to use multiple templates
isMultiTemplate = True
# Whether to smooth the trajectory with a mean filter
isSmooth = True
# Distance threshold between adjacent trajectory points
dis_thresh = 10
# Number of templates in multi-template mode
templateNum = 8
# Initial radius of the candidate search window
range_d = 30
# Gray-threshold sensitivity: the larger it is, the lower the gray threshold
gray_factor = 0.2
# Detection-box scale factor: the larger it is, the bigger the drawn box
scale_factor = 1.5
# Template scale factor: the larger it is, the bigger the template image
template_factor = 0.6
# Colour of the detection box
color = (0, 0, 255)
# Output paths
parent_path = video_path.replace(video_path.split("\\")[-1], '')
out_path = parent_path + "object.avi"
out_path2 = parent_path + "track.avi"
out_path3 = parent_path + "points.txt"
out_path4 = parent_path + "velocity.txt"
out_path5 = parent_path + "template.jpg"
# --------------- Optional parameters ---------------
# Loop counter
count = 0
# Open the video (twice: cap for matching, cap2 for template extraction)
cap = cv2.VideoCapture(video_path)
cap2 = cv2.VideoCapture(video_path)
# Get the video frame size
# video_h is the vertical size, video_w the horizontal size
video_h = int(cap.get(4))
video_w = int(cap.get(3))
total = int(cap.get(7))
# Create an image the same size as the video for drawing the trajectory
track = np.zeros((video_h, video_w, 3), np.uint8)
# tlp stores the top-left corners of the candidate search windows
tlp = []
# rbp stores the bottom-right corners of the candidate search windows
rbp = []
# bottom_right_points stores the bottom-right corners of the target region
bottom_right_points = []
# center_points stores the centre points of the target region
center_points = []
# trackPoints stores the top-left corners of the target region
trackPoints = []
# Vs stores the target's speed in each frame
Vs = []
# Compute the wait time per frame from the video frame rate
if cap.get(5) != 0:
    waitTime = int(1000.0 / cap.get(5))
    fps = cap.get(5)
else:
    # Fall back to an assumed 25 FPS if the frame-rate metadata is missing
    fps = 25.0
    waitTime = 40
# If enabled, derive the distance threshold automatically
if isAutoDisThresh:
    # Maximum per-frame motion of the object in pixels:
    # velocity (km/h) * 5/18 converts to m/s, dividing by resolution (m/pixel)
    # gives pixels per second, and dividing by (fps - 1) gives pixels per frame
    max_range = math.ceil((5.0 * velocity) / (18.0 * resolution * (fps - 1)))
    # The diagonal of that maximum range is used as the distance threshold
    dis_thresh = math.ceil(pow(pow(max_range, 2) + pow(max_range, 2), 0.5))
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter(out_path, fourcc, fps, (video_w, video_h))
out2 = cv2.VideoWriter(out_path2, fourcc, fps, (video_w, video_h))
# First, extract the template image from the opening frames
if cap2.isOpened():
    # Read the first two frames
    ret, frame1 = cap2.read()
    ret, frame2 = cap2.read()
    # Subtract one frame from the other
    sub = cv2.subtract(frame1, frame2)
    # Convert the difference to grayscale
    gray = cv2.cvtColor(sub, cv2.COLOR_BGR2GRAY)
    # Check whether the difference image is all zeros
    if gray.max() != 0:
        # Find the position of the maximum difference
        loc = np.where(gray == gray.max())
        loc_x = loc[1][0]
        loc_y = loc[0][0]
        # Expand outwards from loc by range_d to get the search window
        win_tl_x = loc_x - range_d
        win_tl_y = loc_y - range_d
        win_rb_x = loc_x + range_d
        win_rb_y = loc_y + range_d
        # Clamp the window to the image bounds
        if win_tl_x < 0:
            win_tl_x = 0
        if win_tl_y < 0:
            win_tl_y = 0
        if win_rb_x > video_w:
            win_rb_x = video_w
        if win_rb_y > video_h:
            win_rb_y = video_h
        # Extract the window content using the window coordinates
        win_ini = cv2.cvtColor(frame1[win_tl_y:win_rb_y, win_tl_x:win_rb_x, :], cv2.COLOR_BGR2GRAY)
        # Get the gray value at the position of the maximum difference
        tem_img = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        # Derive a suitable gray threshold from that gray value
        gray_thresh = tem_img[loc_y, loc_x] - gray_factor * tem_img[loc_y, loc_x]
        # Binarize the initial window
        ret, thresh = cv2.threshold(win_ini, gray_thresh, 255, cv2.THRESH_BINARY)
        # Find contours in the initial window
        # (findContours returns 3 values in OpenCV 3.x and 2 in 4.x; taking the
        # last two keeps this working on both versions)
        contours, hi = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
        # Several contours may be found; the one with the most points is assumed to be the target
        length = []
        for item in contours:
            length.append(item.shape[0])
        target_contour = contours[length.index(max(length))]
        # Get the bounding-box coordinates of the target contour
        x, y, w, h = cv2.boundingRect(target_contour)
        if isSquare:
            # Make sure the extracted template is square
            tem_tl_x = win_tl_x + x
            tem_tl_y = win_tl_y + y
            tem_rb_x = win_tl_x + x + w
            tem_rb_y = win_tl_y + y + h
            center_x = (tem_tl_x + tem_rb_x) // 2
            center_y = (tem_tl_y + tem_rb_y) // 2
            delta = int(template_factor * max(w, h))
            real_tl_x = center_x - delta
            real_rb_x = center_x + delta
            real_tl_y = center_y - delta
            real_rb_y = center_y + delta
        else:
            # Do not force the template to be square
            real_tl_x = win_tl_x + x
            real_tl_y = win_tl_y + y
            real_rb_x = win_tl_x + x + w
            real_rb_y = win_tl_y + y + h
        # Clamp to the image bounds
        if real_tl_x < 0:
            real_tl_x = 0
        if real_tl_y < 0:
            real_tl_y = 0
        if real_rb_x > video_w:
            real_rb_x = video_w
        if real_rb_y > video_h:
            real_rb_y = video_h
        # Extract the template content
        template = frame1[real_tl_y:real_rb_y, real_tl_x:real_rb_x, :]
        # Get the template size: h is vertical, w is horizontal
        h = template.shape[0]
        w = template.shape[1]
        d = max(w, h)
        # Multi-template mode: pre-compute rotated copies of the template at evenly
        # spaced angles (templateNum = 4, 8 or 16 gives steps of 90, 45 or 22.5 degrees)
        if isMultiTemplate:
            angle_step = 360.0 / templateNum
            # rotated_templates maps each rotation angle to the template rotated by
            # that angle; angle 0 is the original template itself
            rotated_templates = {0.0: template}
            for i in range(1, templateNum):
                angle = i * angle_step
                M = cv2.getRotationMatrix2D((d / 2, d / 2), -angle, 1)
                rotated_templates[angle] = cv2.warpAffine(template, M, (d, d))
        cv2.imshow("Template", template)
        cv2.imwrite(out_path5, template)
        offset = int(scale_factor * d)
        # Top-left corner of the candidate search window
        tlx = loc_x - d
        tly = loc_y - d
        # Clamp to 0 if out of bounds
        if tlx < 0:
            tlx = 0
        if tly < 0:
            tly = 0
        range_tl = (tlx, tly)
        # Bottom-right corner of the candidate search window
        rbx = loc_x + w + d
        rby = loc_y + h + d
        # Clamp to the video size if out of bounds
        if rbx > video_w:
            rbx = video_w
        if rby > video_h:
            rby = video_h
        range_rb = (rbx, rby)
        # Append the corner coordinates to the lists
        tlp.append(range_tl)
        rbp.append(range_rb)
        cap2.release()
# Then perform template matching frame by frame
while cap.isOpened():
    # Read the current frame
    ret, frame = cap.read()
    # Stop if the frame is empty, otherwise continue
    if frame is None:
        break
    else:
        # Multi-template mode: match every rotated template against the current
        # search window and keep the location of the strongest response
        if isMultiTemplate:
            window_img = frame[tlp[count][1]:rbp[count][1], tlp[count][0]:rbp[count][0], :]
            m = -2.0  # below the minimum possible TM_CCOEFF_NORMED score
            for angle, tmpl in rotated_templates.items():
                res = cv2.matchTemplate(window_img, tmpl, cv2.TM_CCOEFF_NORMED)
                min_val, max_val, min_loc, cur_loc = cv2.minMaxLoc(res)
                if max_val > m:
                    # mIndex records the rotation angle of the best-matching template
                    m = max_val
                    mIndex = angle
                    max_loc = cur_loc
        else:
            res = cv2.matchTemplate(frame[tlp[count][1]:rbp[count][1], tlp[count][0]:rbp[count][0], :], template,
                                    cv2.TM_CCOEFF_NORMED)
            min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
        window = frame[tlp[count][1]:rbp[count][1], tlp[count][0]:rbp[count][0], :]
        cv2.imshow("Window", window)
        # top_left coordinate order is (horizontal, vertical), i.e. (→, ↓)
        top_left = (max_loc[0] + tlp[count][0], max_loc[1] + tlp[count][1])
        bottom_right = (top_left[0] + w, top_left[1] + h)
        center_point = ((top_left[0] + bottom_right[0]) // 2, (top_left[1] + bottom_right[1]) // 2)
        if len(trackPoints) == 0:
            # Top-left corner of the candidate search window
            tlx = top_left[0] - d
            tly = top_left[1] - d
            # Clamp to 0 if out of bounds
            if tlx < 0:
                tlx = 0
            if tly < 0:
                tly = 0
            range_tl = (tlx, tly)
            # Bottom-right corner of the candidate search window
            rbx = top_left[0] + w + d
            rby = top_left[1] + h + d
            # Clamp to the video size if out of bounds
            if rbx > video_w:
                rbx = video_w
            if rby > video_h:
                rby = video_h
            range_rb = (rbx, rby)
            # Append the candidate-window corner coordinates to the lists
            tlp.append(range_tl)
            rbp.append(range_rb)
            # Append the target region's top-left, bottom-right and centre points to the lists
            trackPoints.append(top_left)
            bottom_right_points.append(bottom_right)
            center_points.append(center_point)
            cv2.circle(track, center_point, 2, (0, 0, 255), -1)
        else:
            # Motion-continuity constraint: if adjacent trajectory points are farther
            # apart than the threshold, treat the match as wrong and stop
            distance = abs(trackPoints[-1][0] - top_left[0]) + abs(trackPoints[-1][1] - top_left[1])
            if distance > dis_thresh:
                print('100%')
                break
            else:
                # Top-left corner of the candidate search window
                tlx = top_left[0] - d
                tly = top_left[1] - d
                # Clamp to 0 if out of bounds
                if tlx < 0:
                    tlx = 0
                if tly < 0:
                    tly = 0
                range_tl = (tlx, tly)
                # Bottom-right corner of the candidate search window
                rbx = top_left[0] + w + d
                rby = top_left[1] + h + d
                # Clamp to the video size if out of bounds
                if rbx > video_w:
                    rbx = video_w
                if rby > video_h:
                    rby = video_h
                range_rb = (rbx, rby)
                # Append the candidate-window corner coordinates to the lists
                tlp.append(range_tl)
                rbp.append(range_rb)
                # Append the target region's top-left and bottom-right points to the lists
                trackPoints.append(top_left)
                bottom_right_points.append(bottom_right)
                # Optionally smooth the trajectory with the mean of the last two points
                if isSmooth:
                    center_point = ((center_point[0] + center_points[-1][0]) // 2,
                                    (center_point[1] + center_points[-1][1]) // 2)
                center_points.append(center_point)
                # Draw the detection box
                cv2.rectangle(frame,
                              (center_point[0] - offset, center_point[1] - offset),
                              (center_point[0] + offset, center_point[1] + offset),
                              color, 2)
                # Draw the motion trajectory
                cv2.line(track, center_points[-2], center_points[-1], (255, 255, 255), 1)
                # Compute the speed between the last two centre points
                Vs.append(calcVelocity(center_points[-2][0],
                                       center_points[-1][0],
                                       center_points[-2][1],
                                       center_points[-1][1],
                                       resolution,
                                       waitTime))
        # Write the detection and trajectory frames to the output videos
        out.write(frame)
        out2.write(track)
        count += 1
        print(round((count * 1.0 / total) * 100, 2), '%')
        # Show the results
        cv2.imshow("Tr", track)
        cv2.imshow("Fr", frame)
        # Exit on Esc
        k = cv2.waitKey(waitTime) & 0xFF
        if k == 27:
            break
# Print the trajectory coordinates and the key parameters
print(trackPoints)
print('Distance threshold between adjacent frames:', dis_thresh)
print('Gray threshold:', gray_thresh)
print('Template scale factor:', template_factor)
print('Detection-box scale factor:', scale_factor)
# Write the centre-point trajectory to file
output = open(out_path3, 'w')
for item in center_points:
    output.write(str(item) + "\n")
# Write the per-frame speeds to file
output2 = open(out_path4, 'w')
for item in Vs:
    output2.write(str(item) + "\n")
# Release objects
cap.release()
out.release()
out2.release()
output.close()
output2.close()

The main improvements made in the code are as follows:

1. Added a multi-template matching mechanism

To detect an object's rotation accurately, multi-template matching is introduced; the code offers a choice of 4, 8, or 16 templates. The more templates, the more precisely rotation is recognized. In the figures below the number of matching templates is 1, 4, 8, and 16 respectively.

It can be seen that single-template matching can no longer recognize and track the target correctly. With 4 templates there are a few tracking errors, while with 8 and 16 templates the track is relatively accurate. The next figure compares the trajectories obtained with 8-template and single-template matching: multi-template matching handles the rotating object noticeably better. The white curve is the single-template trajectory and the red curve is the multi-template trajectory.

Moving targets in satellite video generally only translate and rotate, without scaling, so the optimized algorithm covers most practical needs.

2. Added trajectory smoothing

Taking the mean of the last two points in the trajectory list as the final trajectory point smooths the extracted trajectory to some extent, as shown in the sketch below.
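
As a standalone illustration of that smoothing step (smooth_point is a hypothetical helper, not part of the script above):

def smooth_point(prev_point, new_point):
    # Replace the new centre point by its mean with the previous trajectory
    # point, mirroring the isSmooth branch of the script above.
    return ((prev_point[0] + new_point[0]) // 2,
            (prev_point[1] + new_point[1]) // 2)

# For example, smooth_point((100, 200), (104, 206)) returns (102, 203).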

III. Test Comparison

The figure below comes from a video simulating an aircraft flying along a curved path; the result of object detection and trajectory extraction on it is shown next.

The corresponding flight trajectory is shown below.

Compared with single-template matching, the moving target and its trajectory are extracted well. With the previous single-template algorithm, testing showed that the target is lost as soon as it starts to turn, as shown below.

This concludes this article on video object detection and trajectory tracking in Python. For more on video object detection in Python, please search 腳本之家's earlier articles, and please keep supporting 腳本之家!
