Python 3: Real-Time Webcam Face Detection and Landmark Annotation with Dlib 19.7
0. Introduction
This project uses Python with the Dlib library to capture faces from a webcam and annotate their facial landmarks in real time.
Figure 1: Demo of the project (GIF)
Figure 2: Demo of the project (static image)
(The implementation is fairly simple and the code is short, so it is a good fit for beginners or hobby learning.)
1. Development Environment
Python: 3.6.3
Dlib: 19.7
OpenCV, NumPy
import dlib          # Dlib, for face detection
import numpy as np   # NumPy, for array handling
import cv2           # OpenCV, for image processing
2. Source Code Walkthrough
The implementation is simple and splits into two parts: calling the webcam, and annotating facial landmarks.
2.1 Calling the Webcam
Here is how a camera is accessed in OpenCV:
Create a capture object with cap = cv2.VideoCapture(0);
(see the official OpenCV documentation for details)
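Before the property reference below, here is a minimal sketch of the capture loop. So that it runs without a physical camera, a hypothetical FakeCapture stand-in (my own name, not part of OpenCV) mimics the cv2.VideoCapture interface used in this post; the real code would simply use cv2.VideoCapture(0).

```python
import numpy as np

class FakeCapture:
    """Stand-in for cv2.VideoCapture, used here only so the loop can run
    without a physical camera; real code would use cv2.VideoCapture(0)."""
    def __init__(self, n_frames=3):
        self.n = n_frames
    def isOpened(self):
        # mimics cap.isOpened(): True once the device is initialized
        return True
    def read(self):
        # mimics cap.read(): returns (success flag, frame as a 3-D BGR array)
        if self.n > 0:
            self.n -= 1
            return True, np.zeros((480, 640, 3), dtype=np.uint8)
        return False, None
    def release(self):
        pass

cap = FakeCapture()              # real code: cap = cv2.VideoCapture(0)
frames = []
while cap.isOpened():
    flag, im_rd = cap.read()     # same two return values as the real cap.read()
    if not flag:                 # False once the stream ends
        break
    frames.append(im_rd)
cap.release()

print(len(frames), frames[0].shape)
```

The loop structure (isOpened check, read returning a flag plus a frame, release at the end) is exactly the skeleton the full program below is built on.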
# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie

# cv2.VideoCapture() creates a cv2 capture object / opens the default camera.
#   cv2.VideoCapture(filename) -> capture from a video file (e.g. video.avi) or an
#       image sequence (e.g. img_%02d.jpg, which reads img_00.jpg, img_01.jpg, ...)
#   cv2.VideoCapture(device)   -> capture from a camera index; if there is a single
#       camera connected, just pass 0
cap = cv2.VideoCapture(0)

# cv2.VideoCapture.set(propId, value) sets a video property. propId can be:
#   CV_CAP_PROP_POS_MSEC       current position of the video file in milliseconds
#   CV_CAP_PROP_POS_FRAMES     0-based index of the frame to be decoded/captured next
#   CV_CAP_PROP_POS_AVI_RATIO  relative position: 0 = start of the film, 1 = end
#   CV_CAP_PROP_FRAME_WIDTH    width of the frames in the video stream
#   CV_CAP_PROP_FRAME_HEIGHT   height of the frames in the video stream
#   CV_CAP_PROP_FPS            frame rate
#   CV_CAP_PROP_FOURCC         4-character code of the codec
#   CV_CAP_PROP_FRAME_COUNT    number of frames in the video file
#   CV_CAP_PROP_FORMAT         format of the Mat objects returned by retrieve()
#   CV_CAP_PROP_MODE           backend-specific value for the current capture mode
#   CV_CAP_PROP_BRIGHTNESS / CONTRAST / SATURATION / HUE / GAIN / EXPOSURE
#                              image properties (only for cameras)
#   CV_CAP_PROP_CONVERT_RGB    whether images should be converted to RGB
#   CV_CAP_PROP_WHITE_BALANCE_U / _V, CV_CAP_PROP_RECTIFICATION,
#   CV_CAP_PROP_ISO_SPEED, CV_CAP_PROP_BUFFERSIZE
#                              (only supported by the DC1394 v2.x backend currently)
# value: the value to set the property to
cap.set(3, 480)

# cv2.VideoCapture.isOpened() checks whether camera initialization succeeded;
# returns True or False
cap.isOpened()

# cv2.VideoCapture.read([image]) -> retval, image: grabs, decodes and returns
# the next video frame. Two return values:
#   a boolean True/False: whether the read succeeded / whether the video has ended
#   the frame itself, as a 3-D array
flag, im_rd = cap.read()
2.2 人臉特征點標(biāo)定
The predictor "shape_predictor_68_face_landmarks.dat" performs the 68-point annotation. It is a pretrained Dlib model that can be called directly to mark 68 facial landmarks on a face.
For details, see my other post (Python 3: annotating 68 facial landmarks with Dlib 19.7).
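The predictor returns a shape object whose .parts() yields 68 point objects with .x and .y attributes, and the main program converts them into one (68, 2) matrix. A sketch of that conversion, using a hypothetical Point stub with fabricated coordinates in place of real predictor output (so it runs without dlib or the model file):

```python
import numpy as np

class Point:
    """Stand-in for the point objects returned by predictor(...).parts();
    real Dlib points expose .x and .y the same way."""
    def __init__(self, x, y):
        self.x, self.y = x, y

# In the real code: parts = predictor(im_rd, rects[i]).parts(), 68 points.
parts = [Point(i, i * 2) for i in range(68)]   # fabricated coordinates

# Same conversion as in the main program: one (68, 2) matrix of (x, y) rows
landmarks = np.matrix([[p.x, p.y] for p in parts])

print(landmarks.shape)   # (68, 2)
```

Each row of the matrix is then indexed as (landmarks[idx, 0], landmarks[idx, 1]) to draw the point with cv2.circle.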
2.3 Source Code
The approach is straightforward:
Create a camera object with cv2.VideoCapture(), then read the video with flag, im_rd = cap.read(); im_rd is one frame of the video;
Each frame im_rd is then handled like a single still image: Dlib detects the face and annotates its landmarks, and the landmarks are drawn onto the frame;
Press the 's' key to save a screenshot of the current frame, or the 'q' key to quit the camera.
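The key handling in the last step can be factored into a small pure function; handle_key is a hypothetical helper of my own (the main program below inlines this logic), shown here because it is easy to verify in isolation:

```python
def handle_key(k, cnt):
    """Mirror the key handling in the capture loop: 's' saves a screenshot
    (here we just build the filename), 'q' quits, anything else is ignored.
    Returns (action, updated screenshot counter, filename or None)."""
    if k == ord('s'):
        cnt += 1
        return 'save', cnt, "screenshoot" + str(cnt) + ".jpg"
    if k == ord('q'):
        return 'quit', cnt, None
    return 'none', cnt, None

print(handle_key(ord('s'), 0))   # ('save', 1, 'screenshoot1.jpg')
print(handle_key(ord('q'), 1))   # ('quit', 1, None)
```

In the real loop, k comes from cv2.waitKey(1), which returns the code of the key pressed during that 1 ms wait (or -1 if none); the "screenshoot" spelling matches the filename used by the original program.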
# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib          # Dlib, for face detection
import numpy as np   # NumPy, for array handling
import cv2           # OpenCV, for image processing

# Dlib frontal face detector and 68-point landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# sets a video property: propId is the property, value is the value to set
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False: whether initialization succeeded
while cap.isOpened():
    # cap.read() returns two values:
    #   a boolean True/False: whether the read succeeded / whether the video has ended
    #   the frame itself, as a 3-D array
    flag, im_rd = cap.read()
    if not flag:
        break

    # wait 1 ms per frame; a delay of 0 would block on a static frame
    k = cv2.waitKey(1)

    # convert to grayscale (OpenCV frames are BGR, so use COLOR_BGR2GRAY)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected faces
    rects = detector(img_gray, 0)
    # print(len(rects))

    # font for the overlay text
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 points
    if len(rects) != 0:
        # at least one face detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])
            for idx, point in enumerate(landmarks):
                # coordinates of each of the 68 points
                pos = (point[0, 0], point[0, 1])

                # draw a circle on each landmark with cv2.circle, 68 in total
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))

                # label the points 1-68 with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)

        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen instructions
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press 's' to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshoot" + str(cnt) + ".jpg", im_rd)

    # press 'q' to quit
    if k == ord('q'):
        break

    # show the window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# destroy the created windows
cv2.destroyAllWindows()
If this helped you, a star on the GitHub project is welcome.