C++ OpenCV: Implementing Traffic Light Recognition
Introduction
This article implements traffic light recognition for driving footage. It covers two core topics: how OpenCV contour detection works, and the concrete steps for implementing traffic light recognition with OpenCV. Note that the code relies on parts of OpenCV's legacy C API (CV_BGR2YCrCb, CvMemStorage, CvSeq, cvWaitKey), so it is written against an OpenCV 2.x/3.x build rather than 4.x.
1. Principles of Contour Detection

What is contour detection?
Current contour detection methods fall into two broad categories: one detects object contours with traditional edge-detection operators; the other borrows usable mathematical models from the human visual system to perform contour detection.

The contour extraction function findContours
Function prototype:
findContours( InputOutputArray image, OutputArrayOfArrays contours,
OutputArray hierarchy, int mode,
int method, Point offset=Point());
Parameters:
image: a single-channel image matrix. It can be a grayscale image, but more commonly it is a binary image, typically the output of an edge detector such as Canny or the Laplacian.
contours: declared as vector<vector<Point>> contours. It is a vector of vectors: each element holds one contour as a set of consecutive Point coordinates, so contours has exactly as many elements as there are detected contours.
hierarchy: also a vector; each element holds an array of four int values (a Vec4i). Its elements correspond one-to-one to the elements of contours, and the two vectors have the same size.
mode: the contour retrieval mode. Typical values are CV_RETR_EXTERNAL (outermost contours only), CV_RETR_LIST, CV_RETR_CCOMP and CV_RETR_TREE; the code in this article uses CV_RETR_CCOMP.
method: the contour approximation method:
Value one: CV_CHAIN_APPROX_NONE stores every consecutive point on the object boundary into contours.
Value two: CV_CHAIN_APPROX_SIMPLE keeps only the corner points of each contour; points lying on the straight segments between corners are not stored.
Values three and four: CV_CHAIN_APPROX_TC89_L1 and CV_CHAIN_APPROX_TC89_KCOS use the Teh-Chin chain approximation algorithm.
offset: a Point offset applied to all contour coordinates relative to the original image, i.e. the offset is added to every detected contour point; it may also be negative.
Parameter descriptions adapted from: findContours函數(shù)參數(shù)詳解
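Before the full project, here is a minimal, self-contained sketch of how findContours is typically called on a binary image. The file names and Canny thresholds are illustrative assumptions, not part of this article's project:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    // "sample.png" is a placeholder; substitute any image you have on disk
    Mat src = imread("sample.png", IMREAD_GRAYSCALE);
    if (src.empty())
        return -1;
    Mat edges;
    Canny(src, edges, 100, 200);   // edge map -> binary input for findContours
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // same retrieval/approximation choices as the article's code (CCOMP + SIMPLE)
    findContours(edges, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
    cout << "contours found: " << contours.size() << endl;
    // summarize each contour by its bounding box and draw it for inspection
    for (size_t i = 0; i < contours.size(); i++)
    {
        rectangle(src, boundingRect(contours[i]), Scalar(255), 2);
    }
    imwrite("contours_out.png", src);   // placeholder output name
    return 0;
}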
2. Implementing the Example
This is the material used in this example, as shown in the figure below:

PS: a video gives the best results; if it is convenient, you can also go out and shoot your own footage.
Step 1: Initial setup
Do the preparatory work: create the variables we need, set the brightness parameters, and set the path of the video to import.
int redCount = 0;
int greenCount = 0;
Mat frame;
Mat img;
Mat imgYCrCb;
Mat imgGreen;
Mat imgRed;
// brightness parameters (convertTo applies dst = a*src + b, so a = 0.3 and b = 87.5 here)
double a = 0.3;
double b = (1 - a) * 125;
VideoCapture capture("C:/Users/86177/Desktop/image/123.mp4");// path of the video to import
if (!capture.isOpened())
{
cout << "Start device failed!\n" << endl;//啟動設(shè)備失敗!
return -1;
}
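The path above is hard-coded to the author's machine. As a small, hedged variation that is not part of the original article, the path can be taken from the command line with the hard-coded value as a fallback, which makes the sample easier to rerun:
#include "opencv2/opencv.hpp"
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    // use argv[1] when given, otherwise fall back to the article's hard-coded path
    string path = (argc > 1) ? argv[1] : "C:/Users/86177/Desktop/image/123.mp4";
    VideoCapture capture(path);
    if (!capture.isOpened())
    {
        cout << "Failed to open " << path << endl;
        return -1;
    }
    Mat probe;
    capture >> probe;   // read one frame to confirm the stream actually decodes
    cout << path << (probe.empty() ? ": opened but no frame decoded" : ": looks readable") << endl;
    return 0;
}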
Step 2: Frame processing
Adjust the frame brightness, convert the frame to the YCrCb color space and split its three channels, then threshold the Cr (red-difference chroma) channel to separate red from green, which makes the two lamp colors easy to extract.
// frame processing
while (1)
{
capture >> frame;
// adjust brightness
frame.convertTo(img, img.type(), a, b);
// convert to the YCrCb color space
cvtColor(img, imgYCrCb, CV_BGR2YCrCb);
imgRed.create(imgYCrCb.rows, imgYCrCb.cols, CV_8UC1);
imgGreen.create(imgYCrCb.rows, imgYCrCb.cols, CV_8UC1);
// split YCrCb into its three channels
vector<Mat> planes;
split(imgYCrCb, planes);
// walk the Cr channel to separate red and green
MatIterator_<uchar> it_Cr = planes[1].begin<uchar>(),
it_Cr_end = planes[1].end<uchar>();
MatIterator_<uchar> it_Red = imgRed.begin<uchar>();
MatIterator_<uchar> it_Green = imgGreen.begin<uchar>();
for (; it_Cr != it_Cr_end; ++it_Cr, ++it_Red, ++it_Green)
{
// RED: Cr > 145 (the stated upper bound of 470 never applies, since Cr is an 8-bit value)
if (*it_Cr > 145 && *it_Cr < 470)
*it_Red = 255;
else
*it_Red = 0;
// GREEN: 95 < Cr < 110
if (*it_Cr > 95 && *it_Cr < 110)
*it_Green = 255;
else
*it_Green = 0;
}
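For reference, the same Cr split can be written more compactly with cv::inRange. This is a hedged sketch (the helper name is mine, and the thresholds simply mirror the loop above, capped at 255 because Cr is 8-bit), not something tested against the article's video:
#include "opencv2/opencv.hpp"
#include <vector>
using namespace cv;
using namespace std;
// Build the red/green masks from one BGR frame by thresholding the Cr channel.
static void makeColorMasks(const Mat& bgr, Mat& redMask, Mat& greenMask)
{
    Mat ycrcb;
    cvtColor(bgr, ycrcb, CV_BGR2YCrCb);
    vector<Mat> planes;
    split(ycrcb, planes);                                    // planes[1] is Cr
    inRange(planes[1], Scalar(146), Scalar(255), redMask);   // Cr > 145 -> candidate red pixels
    inRange(planes[1], Scalar(96), Scalar(109), greenMask);  // 95 < Cr < 110 -> candidate green pixels
}
One caveat worth noting in passing: the loop above never checks frame.empty(), so it will fail once the video ends; adding if (frame.empty()) break; right after capture >> frame; avoids that.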
Step 3: Dilation and erosion
Dilate and then erode the masks to remove other noise and strengthen the traffic-light features we need.
// dilation and erosion
dilate(imgRed, imgRed, Mat(15, 15, CV_8UC1), Point(-1, -1));
erode(imgRed, imgRed, Mat(1, 1, CV_8UC1), Point(-1, -1));
dilate(imgGreen, imgGreen, Mat(15, 15, CV_8UC1), Point(-1, -1));
erode(imgGreen, imgGreen, Mat(1, 1, CV_8UC1), Point(-1, -1));
redCount = processImgR(imgRed);
greenCount = processImgG(imgGreen);
cout << "red:" << redCount << "; " << "green:" << greenCount << endl;Step4:紅綠燈提示判斷
Step 4: Showing the recognition result
Overlay the recognition result on the frame so we can check how well it works: "lights out" when both areas are zero, "red light" when the red area is larger, and "green light" otherwise.
if(redCount == 0 && greenCount == 0)
{
cv::putText(frame, "lights out", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(255, 255, 255), 8, 8, 0);
}else if(redCount > greenCount)
{
cv::putText(frame, "red light", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(0, 0, 255), 8, 8, 0);
}else{
cv::putText(frame, "green light", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(0, 255, 0), 8, 8, 0);
}
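Because this per-frame comparison can flicker when both areas are small, one optional refinement, which is my own suggestion rather than part of the article, is to smooth the decision over the last few frames before drawing it:
#include <deque>
#include <string>
using namespace std;
// Majority vote over the last few per-frame decisions (window of 5 is arbitrary).
struct LightVote
{
    deque<int> history;   // +1 = red, -1 = green, 0 = no light
    string update(int redCount, int greenCount)
    {
        int v = 0;
        if (redCount > greenCount) v = 1;
        else if (greenCount > redCount) v = -1;
        history.push_back(v);
        if (history.size() > 5) history.pop_front();
        int sum = 0;
        for (size_t i = 0; i < history.size(); i++) sum += history[i];
        if (sum > 0) return "red light";
        if (sum < 0) return "green light";
        return "lights out";
    }
};
The returned string could then be handed to putText in place of the fixed literals.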

Step 5: Contour extraction
Extract contour features from the red mask and from the green mask separately to make the detection more reliable. Each function finds the contours, takes the bounding box of each contour's convex hull, keeps only the boxes that intersect a box from the previous frame (a simple temporal-consistency check), and returns the total area of the surviving boxes; that total is what redCount and greenCount measure.
int processImgR(Mat src)
{
Mat tmp;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
vector<Point> hull;
CvPoint2D32f tempNode;
CvMemStorage* storage = cvCreateMemStorage();
CvSeq* pointSeq = cvCreateSeq(CV_32FC2, sizeof(CvSeq), sizeof(CvPoint2D32f), storage);
Rect* trackBox;
Rect* result;
int resultNum = 0;
int area = 0;
src.copyTo(tmp);
// extract the contours
findContours(tmp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (contours.size() > 0)
{
trackBox = new Rect[contours.size()];
result = new Rect[contours.size()];
// determine the regions to track
for (int i = 0; i < contours.size(); i++)
{
cvClearSeq(pointSeq);
// get the points of the convex hull
convexHull(Mat(contours[i]), hull, true);
int hullcount = (int)hull.size();
// store the hull points
for (int j = 0; j < hullcount - 1; j++)
{
tempNode.x = hull[j].x;
tempNode.y = hull[j].y;
cvSeqPush(pointSeq, &tempNode);
}
trackBox[i] = cvBoundingRect(pointSeq);
}
if (isFirstDetectedR)
{
lastTrackBoxR = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
lastTrackBoxR[i] = trackBox[i];
lastTrackNumR = contours.size();
isFirstDetectedR = false;
}
else
{
for (int i = 0; i < contours.size(); i++)
{
for (int j = 0; j < lastTrackNumR; j++)
{
if (isIntersected(trackBox[i], lastTrackBoxR[j]))
{
result[resultNum] = trackBox[i];
break;
}
}
resultNum++;
}
delete[] lastTrackBoxR;
lastTrackBoxR = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
{
lastTrackBoxR[i] = trackBox[i];
}
lastTrackNumR = contours.size();
}
delete[] trackBox;
}
else
{
isFirstDetectedR = true;
result = NULL;
}
cvReleaseMemStorage(&storage);
if (result != NULL)
{
for (int i = 0; i < resultNum; i++)
{
area += result[i].area();
}
}
delete[] result;
return area;
}
int processImgG(Mat src)
{
Mat tmp;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
vector< Point > hull;
CvPoint2D32f tempNode;
CvMemStorage* storage = cvCreateMemStorage();
CvSeq* pointSeq = cvCreateSeq(CV_32FC2, sizeof(CvSeq), sizeof(CvPoint2D32f), storage);
Rect* trackBox;
Rect* result;
int resultNum = 0;
int area = 0;
src.copyTo(tmp);
// extract the contours
findContours(tmp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (contours.size() > 0)
{
trackBox = new Rect[contours.size()];
result = new Rect[contours.size()];
// determine the regions to track
for (int i = 0; i < contours.size(); i++)
{
cvClearSeq(pointSeq);
// get the points of the convex hull
convexHull(Mat(contours[i]), hull, true);
int hullcount = (int)hull.size();
// store the hull points
for (int j = 0; j < hullcount - 1; j++)
{
tempNode.x = hull[j].x;
tempNode.y = hull[j].y;
cvSeqPush(pointSeq, &tempNode);
}
trackBox[i] = cvBoundingRect(pointSeq);
}
if (isFirstDetectedG)
{
lastTrackBoxG = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
lastTrackBoxG[i] = trackBox[i];
lastTrackNumG = contours.size();
isFirstDetectedG = false;
}
else
{
for (int i = 0; i < contours.size(); i++)
{
for (int j = 0; j < lastTrackNumG; j++)
{
if (isIntersected(trackBox[i], lastTrackBoxG[j]))
{
result[resultNum] = trackBox[i];
break;
}
}
resultNum++;
}
delete[] lastTrackBoxG;
lastTrackBoxG = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
{
lastTrackBoxG[i] = trackBox[i];
}
lastTrackNumG = contours.size();
}
delete[] trackBox;
}
else
{
isFirstDetectedG = true;
result = NULL;
}
cvReleaseMemStorage(&storage);
if (result != NULL)
{
for (int i = 0; i < resultNum; i++)
{
area += result[i].area();
}
}
delete[] result;
return area;
}
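processImgR and processImgG are identical except for the persistent state they use (isFirstDetectedR/G, lastTrackBoxR/G, lastTrackNumR/G). As a hedged refactoring sketch, with names of my own choosing and not tested against the article's video, the same logic can be written once with the state passed in, using std::vector and boundingRect instead of raw new[]/delete[] and the CvSeq detour:
#include "opencv2/opencv.hpp"
#include <vector>
using namespace cv;
using namespace std;
struct TrackState                      // per-color persistent data
{
    bool firstDetection;
    vector<Rect> lastBoxes;
    TrackState() : firstDetection(true) {}
};
// Total area of convex-hull bounding boxes that intersect a box from the previous frame.
static int processMask(const Mat& src, TrackState& state)
{
    Mat tmp;
    src.copyTo(tmp);
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(tmp, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
    if (contours.empty())
    {
        state.firstDetection = true;
        return 0;
    }
    vector<Rect> boxes(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
    {
        vector<Point> hull;
        convexHull(contours[i], hull);
        boxes[i] = boundingRect(hull);         // bounding box of the hull points
    }
    int area = 0;
    if (!state.firstDetection)
    {
        for (size_t i = 0; i < boxes.size(); i++)
            for (size_t j = 0; j < state.lastBoxes.size(); j++)
                if ((boxes[i] & state.lastBoxes[j]).area() > 0)   // boxes overlap
                {
                    area += boxes[i].area();
                    break;
                }
    }
    state.lastBoxes = boxes;
    state.firstDetection = false;
    return area;
}
In main this would be called as redCount = processMask(imgRed, redState); and greenCount = processMask(imgGreen, greenState); with two TrackState variables replacing the six globals.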

Step 6: Checking whether two rectangles intersect
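As an aside, cv::Rect already provides an intersection operator, so an equivalent check can be written in one line; this is only a hedged comparison, not a replacement made in the article's code:
#include "opencv2/opencv.hpp"
using namespace cv;
// true when the two axis-aligned rectangles overlap with non-zero area
static bool rectsOverlap(Rect r1, Rect r2)
{
    return (r1 & r2).area() > 0;   // operator& yields the intersection rectangle
}
The explicit version used here is: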
// determine whether two rectangular regions intersect
bool isIntersected(Rect r1, Rect r2)
{
int minX = max(r1.x, r2.x);
int minY = max(r1.y, r2.y);
int maxX = min(r1.x + r1.width, r2.x + r2.width);
int maxY = min(r1.y + r1.height, r2.y + r2.height);
if (minX < maxX && minY < maxY)
return true;
else
return false;
}

Results



Complete code
#include "opencv2/opencv.hpp"
#include "opencv2/imgproc.hpp"
#include <windows.h>
#include <iostream>
using namespace std;
using namespace cv;
// Function headers
int processImgR(Mat);
int processImgG(Mat);
bool isIntersected(Rect, Rect);
// Global variables
bool isFirstDetectedR = true;
bool isFirstDetectedG = true;
Rect* lastTrackBoxR;
Rect* lastTrackBoxG;
int lastTrackNumR;
int lastTrackNumG;
// main function
int main()
{
int redCount = 0;
int greenCount = 0;
Mat frame;
Mat img;
Mat imgYCrCb;
Mat imgGreen;
Mat imgRed;
// brightness parameters (convertTo applies dst = a*src + b, so a = 0.3 and b = 87.5 here)
double a = 0.3;
double b = (1 - a) * 125;
VideoCapture capture("C:/Users/86177/Desktop/image/123.mp4");// path of the video to import
if (!capture.isOpened())
{
cout << "Start device failed!\n" << endl;//啟動設(shè)備失敗!
return -1;
}
// frame processing
while (1)
{
capture >> frame;
// adjust brightness
frame.convertTo(img, img.type(), a, b);
// convert to the YCrCb color space
cvtColor(img, imgYCrCb, CV_BGR2YCrCb);
imgRed.create(imgYCrCb.rows, imgYCrCb.cols, CV_8UC1);
imgGreen.create(imgYCrCb.rows, imgYCrCb.cols, CV_8UC1);
// split YCrCb into its three channels
vector<Mat> planes;
split(imgYCrCb, planes);
// walk the Cr channel to separate red and green
MatIterator_<uchar> it_Cr = planes[1].begin<uchar>(),
it_Cr_end = planes[1].end<uchar>();
MatIterator_<uchar> it_Red = imgRed.begin<uchar>();
MatIterator_<uchar> it_Green = imgGreen.begin<uchar>();
for (; it_Cr != it_Cr_end; ++it_Cr, ++it_Red, ++it_Green)
{
// RED: Cr > 145 (the stated upper bound of 470 never applies, since Cr is an 8-bit value)
if (*it_Cr > 145 && *it_Cr < 470)
*it_Red = 255;
else
*it_Red = 0;
// GREEN: 95 < Cr < 110
if (*it_Cr > 95 && *it_Cr < 110)
*it_Green = 255;
else
*it_Green = 0;
}
// dilation and erosion
dilate(imgRed, imgRed, Mat(15, 15, CV_8UC1), Point(-1, -1));
erode(imgRed, imgRed, Mat(1, 1, CV_8UC1), Point(-1, -1));
dilate(imgGreen, imgGreen, Mat(15, 15, CV_8UC1), Point(-1, -1));
erode(imgGreen, imgGreen, Mat(1, 1, CV_8UC1), Point(-1, -1));
redCount = processImgR(imgRed);
greenCount = processImgG(imgGreen);
cout << "red:" << redCount << "; " << "green:" << greenCount << endl;
if(redCount == 0 && greenCount == 0)
{
cv::putText(frame, "lights out", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(255, 255, 255), 8, 8, 0);
}else if(redCount > greenCount)
{
cv::putText(frame, "red light", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(0, 0, 255), 8, 8, 0);
}else{
cv::putText(frame, "green light", Point(40, 150), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(0, 255, 0), 8, 8, 0);
}
imshow("video", frame);
imshow("Red", imgRed);
imshow("Green", imgGreen);
// Handle with the keyboard input
if (cvWaitKey(20) == 'q')
break;
}
return 0;
}
int processImgR(Mat src)
{
Mat tmp;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
vector<Point> hull;
CvPoint2D32f tempNode;
CvMemStorage* storage = cvCreateMemStorage();
CvSeq* pointSeq = cvCreateSeq(CV_32FC2, sizeof(CvSeq), sizeof(CvPoint2D32f), storage);
Rect* trackBox;
Rect* result;
int resultNum = 0;
int area = 0;
src.copyTo(tmp);
// extract the contours
findContours(tmp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (contours.size() > 0)
{
trackBox = new Rect[contours.size()];
result = new Rect[contours.size()];
// determine the regions to track
for (int i = 0; i < contours.size(); i++)
{
cvClearSeq(pointSeq);
// get the points of the convex hull
convexHull(Mat(contours[i]), hull, true);
int hullcount = (int)hull.size();
// store the hull points
for (int j = 0; j < hullcount - 1; j++)
{
tempNode.x = hull[j].x;
tempNode.y = hull[j].y;
cvSeqPush(pointSeq, &tempNode);
}
trackBox[i] = cvBoundingRect(pointSeq);
}
if (isFirstDetectedR)
{
lastTrackBoxR = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
lastTrackBoxR[i] = trackBox[i];
lastTrackNumR = contours.size();
isFirstDetectedR = false;
}
else
{
for (int i = 0; i < contours.size(); i++)
{
for (int j = 0; j < lastTrackNumR; j++)
{
if (isIntersected(trackBox[i], lastTrackBoxR[j]))
{
result[resultNum] = trackBox[i];
break;
}
}
resultNum++;
}
delete[] lastTrackBoxR;
lastTrackBoxR = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
{
lastTrackBoxR[i] = trackBox[i];
}
lastTrackNumR = contours.size();
}
delete[] trackBox;
}
else
{
isFirstDetectedR = true;
result = NULL;
}
cvReleaseMemStorage(&storage);
if (result != NULL)
{
for (int i = 0; i < resultNum; i++)
{
area += result[i].area();
}
}
delete[] result;
return area;
}
int processImgG(Mat src)
{
Mat tmp;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
vector< Point > hull;
CvPoint2D32f tempNode;
CvMemStorage* storage = cvCreateMemStorage();
CvSeq* pointSeq = cvCreateSeq(CV_32FC2, sizeof(CvSeq), sizeof(CvPoint2D32f), storage);
Rect* trackBox;
Rect* result;
int resultNum = 0;
int area = 0;
src.copyTo(tmp);
// extract the contours
findContours(tmp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (contours.size() > 0)
{
trackBox = new Rect[contours.size()];
result = new Rect[contours.size()];
// determine the regions to track
for (int i = 0; i < contours.size(); i++)
{
cvClearSeq(pointSeq);
// get the points of the convex hull
convexHull(Mat(contours[i]), hull, true);
int hullcount = (int)hull.size();
// store the hull points
for (int j = 0; j < hullcount - 1; j++)
{
tempNode.x = hull[j].x;
tempNode.y = hull[j].y;
cvSeqPush(pointSeq, &tempNode);
}
trackBox[i] = cvBoundingRect(pointSeq);
}
if (isFirstDetectedG)
{
lastTrackBoxG = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
lastTrackBoxG[i] = trackBox[i];
lastTrackNumG = contours.size();
isFirstDetectedG = false;
}
else
{
for (int i = 0; i < contours.size(); i++)
{
for (int j = 0; j < lastTrackNumG; j++)
{
if (isIntersected(trackBox[i], lastTrackBoxG[j]))
{
result[resultNum] = trackBox[i];
break;
}
}
resultNum++;
}
delete[] lastTrackBoxG;
lastTrackBoxG = new Rect[contours.size()];
for (int i = 0; i < contours.size(); i++)
{
lastTrackBoxG[i] = trackBox[i];
}
lastTrackNumG = contours.size();
}
delete[] trackBox;
}
else
{
isFirstDetectedG = true;
result = NULL;
}
cvReleaseMemStorage(&storage);
if (result != NULL)
{
for (int i = 0; i < resultNum; i++)
{
area += result[i].area();
}
}
delete[] result;
return area;
}
// determine whether two rectangular regions intersect
bool isIntersected(Rect r1, Rect r2)
{
int minX = max(r1.x, r2.x);
int minY = max(r1.y, r2.y);
int maxX = min(r1.x + r1.width, r2.x + r2.width);
int maxY = min(r1.y + r1.height, r2.y + r2.height);
if (minX < maxX && minY < maxY)
return true;
else
return false;
}

3. Summary
This article has covered how OpenCV contour detection works and the concrete steps for implementing traffic light recognition with OpenCV. OpenCV offers many more recognition-related functions to use; the plan is to keep exploring them and to build more interesting examples drawn from everyday scenarios.
That concludes this article on implementing traffic light recognition with C++ and OpenCV.