Pose Estimation with Python and OpenCV: Detailed Code
Preface
The goal is to use a camera to implement multi-person pose recognition.
Environment Setup
Download and install Anaconda
Official installer link: https://anaconda.cloud/installers
Install Jupyter Notebook
Check that Jupyter Notebook is installed
Tip: this involves switching the Jupyter Notebook kernel, which I covered in an earlier article:
Anaconda Navigator: changing the Python kernel used by Jupyter Notebook http://www.dbjr.com.cn/article/238496.htm
Create the Jupyter Notebook project directory
Open the Anaconda Prompt
Change into the project directory
Run `jupyter notebook`
Open Jupyter Notebook in the browser
and create a new notebook
Download the Trained Model
The sample images and the trained model are both available at the link below:
https://github.com/quanhua92/human-pose-estimation-opencv
Copy the images and the trained model into the project directory; graph_opt.pb is the trained model file.
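Before loading the network, it can save confusion to verify that the model file actually made it into the working directory and is not an empty or partial download. A minimal check (the helper name and size threshold are my own, not from the repository):

```python
import os

def model_ready(path="graph_opt.pb", min_bytes=1_000_000):
    """Return True if the model file exists and looks like a complete download.

    graph_opt.pb is several megabytes, so a tiny file is almost certainly
    a failed or partial download; the threshold here is a rough guess.
    """
    return os.path.isfile(path) and os.path.getsize(path) >= min_bytes

# Check before calling cv.dnn.readNetFromTensorflow("graph_opt.pb")
print(model_ready())
```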
Single-Image Recognition
Import the libraries

```python
import cv2 as cv
import os
import matplotlib.pyplot as plt
```
Load the trained model

```python
net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
```
Initialization

```python
inWidth = 368    # network input width
inHeight = 368   # network input height
thr = 0.2        # confidence threshold for keypoints

BODY_PARTS = {"Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
              "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
              "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
              "LEye": 15, "REar": 16, "LEar": 17, "Background": 18}

POSE_PAIRS = [["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
              ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
              ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"],
              ["Neck", "LHip"], ["LHip", "LKnee"], ["LKnee", "LAnkle"],
              ["Neck", "Nose"], ["Nose", "REye"], ["REye", "REar"],
              ["Nose", "LEye"], ["LEye", "LEar"]]
```
Load the image

```python
img = cv.imread("image.jpg")
```
Display the image

```python
plt.imshow(img)
```
Fix the image colors

```python
plt.imshow(cv.cvtColor(img, cv.COLOR_BGR2RGB))
```
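`cv.imread` returns pixels in BGR order while Matplotlib expects RGB, which is why the uncorrected `imshow` looks blue-tinted. The conversion is just a reversal of the channel axis, as a quick NumPy check shows:

```python
import numpy as np

# A single orange-ish pixel stored in BGR order, as OpenCV would load it.
bgr = np.array([[[0, 128, 255]]], dtype=np.uint8)

# Reversing the last axis gives the RGB ordering that Matplotlib expects;
# cv.COLOR_BGR2RGB performs the same channel swap.
rgb = bgr[..., ::-1]
print(rgb[0, 0].tolist())  # [255, 128, 0]
```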
Pose estimation

```python
def pose_estimation(frame):
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output is [1, 57, H, W]; keep only the 19 keypoint heatmaps
    assert len(BODY_PARTS) == out.shape[1]

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Ideally we would find all local maxima; to keep the sample simple we
        # take the global one, so only a single pose can be detected at a time.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence exceeds the threshold.
        points.append((int(x), int(y)) if conf > thr else None)

    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        # Draw the limb line and joint markers.
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    return frame

# Process the image
estimated_image = pose_estimation(img)
# Display the result
plt.imshow(cv.cvtColor(estimated_image, cv.COLOR_BGR2RGB))
```
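The key step in `pose_estimation` is mapping a peak found in the low-resolution heatmap back to image coordinates. Here is a self-contained sketch of that arithmetic using NumPy's `argmax` in place of `cv.minMaxLoc` (the heatmap size and peak location are made-up values for illustration):

```python
import numpy as np

frameWidth, frameHeight = 640, 480
heatMap = np.zeros((46, 46), dtype=np.float32)  # one heatmap channel from the network
heatMap[12, 23] = 0.9                           # pretend the peak is at row 12, col 23

# cv.minMaxLoc reports the peak location as (x, y); argmax gives (row, col), so swap.
row, col = np.unravel_index(np.argmax(heatMap), heatMap.shape)
conf = float(heatMap[row, col])

# Scale heatmap coordinates up to image coordinates, as in pose_estimation.
x = frameWidth * col / heatMap.shape[1]
y = frameHeight * row / heatMap.shape[0]
print(int(x), int(y), conf > 0.2)  # 320 125 True
```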
Video Recognition
Tip: this continues directly from the single-image code above.
The test video comes from the internet (it will be removed on request).

```python
cap = cv.VideoCapture('testvideo.mp4')
cap.set(3, 800)   # frame width
cap.set(4, 800)   # frame height

# Fall back to the default camera if the file cannot be opened.
if not cap.isOpened():
    cap = cv.VideoCapture(0)
if not cap.isOpened():
    raise IOError("Cannot open video")

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output is [1, 57, H, W]; keep only the 19 keypoint heatmaps
    assert len(BODY_PARTS) == out.shape[1]

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Ideally we would find all local maxima; to keep the sample simple we
        # take the global one, so only a single pose can be detected at a time.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence exceeds the threshold.
        points.append((int(x), int(y)) if conf > thr else None)

    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    cv.imshow('Video Tutorial', frame)
```
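For reference, the `cv.dnn.blobFromImage` call used here subtracts the per-channel mean 127.5, swaps the B and R channels, and reorders the array to NCHW layout. A rough NumPy equivalent (my own sketch, skipping the resize step by assuming the input is already 368x368) looks like this:

```python
import numpy as np

def blob_from_bgr(img, mean=127.5, scale=1.0):
    """Rough NumPy sketch of cv.dnn.blobFromImage(..., swapRB=True, crop=False)
    for an image that is already at the network input size."""
    x = img.astype(np.float32)[..., ::-1]   # swap B <-> R
    x = (x - mean) * scale                  # mean subtraction, then scaling
    return x.transpose(2, 0, 1)[np.newaxis]  # HWC -> NCHW: (1, 3, H, W)

demo = np.full((368, 368, 3), 255, dtype=np.uint8)  # dummy all-white frame
blob = blob_from_bgr(demo)
print(blob.shape, float(blob.max()))  # (1, 3, 368, 368) 127.5
```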
Real-Time Camera Recognition
Tip: this continues directly from the single-image code above.

```python
cap = cv.VideoCapture(0)
cap.set(cv.CAP_PROP_FPS, 10)
cap.set(3, 800)   # frame width
cap.set(4, 800)   # frame height

if not cap.isOpened():
    raise IOError("Cannot open camera")

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output is [1, 57, H, W]; keep only the 19 keypoint heatmaps
    assert len(BODY_PARTS) == out.shape[1]

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Ideally we would find all local maxima; to keep the sample simple we
        # take the global one, so only a single pose can be detected at a time.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence exceeds the threshold.
        points.append((int(x), int(y)) if conf > thr else None)

    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    cv.imshow('Video Tutorial', frame)
```
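Both capture loops above exit without releasing the capture device or closing the window. A small wrapper (my own helper, not from the tutorial) makes the cleanup explicit and works for either a video file or a camera source:

```python
def run_capture(cap, process, max_frames=None):
    """Read frames until the stream ends, always releasing the capture.

    `cap` is anything exposing read() -> (ok, frame) and release(),
    e.g. a cv.VideoCapture; `process` is called once per frame.
    Returns the number of frames processed.
    """
    count = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            process(frame)
            count += 1
            if max_frames is not None and count >= max_frames:
                break
    finally:
        cap.release()  # runs even if process() raises
    return count
```

With OpenCV this would be something like `run_capture(cv.VideoCapture(0), lambda f: cv.imshow('Video Tutorial', pose_estimation(f)))`, followed by `cv.destroyAllWindows()`.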
References
DeepLearning_by_PhDScholar
Human Pose Estimation using opencv | python | OpenPose | stepwise implementation for beginners
https://www.youtube.com/watch?v=9jQGsUidKHs