Implementing a Photo Capture Feature with Java + OpenCV and a Camera
Now that we are comfortable with the environment and with basic Mat usage, and are getting familiar with Java Swing, it is time to move on to driving a camera with OpenCV and its common usage scenarios.
Environment Setup
1. A USB camera. I have two kinds on hand: an ordinary Logitech webcam and a binocular (dual-lens) camera (which will later be used for liveness detection);
2. Eclipse 2021-12;
3. JDK 11+, because building the Swing UI uses the WindowBuilder form designer plugin, and in Eclipse 2021-12 WindowBuilder requires JDK 11 or later;
4. Windows 10 as the development environment. A Mac works too, but driving a camera from Java on macOS has a catch: you cannot invoke the camera directly from inside Eclipse. It crashes with "This app has crashed because it attempted to access privacy-sensitive data without a usage description", or with:
OpenCV: not authorized to capture video (status 0), requesting...
OpenCV: can not spin main run loop from other thread, set OPENCV_AVFOUNDATION_SKIP_AUTH=1 to disable authorization request and perform it in your application.
OpenCV: camera failed to properly initialize
Errors like these are all caused by macOS permissions: the process is not authorized to access the Mac's built-in devices. If you were writing Swift in Xcode you could solve this through Info.plist, but for a Java main function launched from Eclipse there is currently no way to authorize access to Mac peripherals. To run OpenCV Java with a camera on macOS, you must package the project as an executable jar and start it from a terminal with java -jar. On launch, macOS will ask you to authorize the terminal; click Yes, confirm with your fingerprint or password, then run the java -jar OpenCV application again and Java can drive the camera. This makes coding and debugging quite inconvenient, which is the main reason we develop OpenCV Java on Windows 10.
Building the Main UI

Our main UI is a Java Swing JFrame application. It looks like this:

Overall Structure

We split the window into an upper and a lower area. The frame is 1024x768, uses a null (absolute) layout, and exits the program when the close button is clicked:
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 1024, 768);
contentPane = new JPanel();
contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(contentPane);
contentPane.setLayout(null);
Upper Area
We group this area with a JPanel named cameraGroup, which also uses a null layout:
JPanel cameraGroup = new JPanel();
cameraGroup.setBounds(10, 10, 988, 580);
contentPane.add(cameraGroup);
cameraGroup.setLayout(null);
Inside this cameraGroup we place two more JPanels, a large one on the left and a small one on the right:
- videoCamera
- videoPreview
videoCamera is a custom JPanel:
protected static VideoPanel videoCamera = new VideoPanel();
It is used to continuously "paint" the frames captured from the camera onto the JPanel while the camera is running. The code is as follows:
package org.mk.opencv;
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;
public class VideoPanel extends JPanel {
private Image image;
public void setImageWithMat(Mat mat) {
image = OpenCVUtil.matToBufferedImage(mat);
this.repaint();
}
public void SetImageWithImg(Image img) {
image = img;
}
public Mat getMatFromImage() {
Mat faceMat = new Mat();
BufferedImage bi = ImageUtils.toBufferedImage(image);
faceMat = OpenCVUtil.bufferedImageToMat(bi);
return faceMat;
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
if (image != null)
g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
}
public static VideoPanel show(String title, int width, int height, int open) {
JFrame frame = new JFrame(title);
if (open == 0) {
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
} else {
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
}
frame.setSize(width, height);
frame.setBounds(0, 0, width, height);
VideoPanel videoPanel = new VideoPanel();
videoPanel.setSize(width, height);
frame.setContentPane(videoPanel);
frame.setVisible(true);
return videoPanel;
}
}
Lower Area
In the lower area we place a buttonGroup panel that uses a grid layout (GridLayout) and holds three buttons:
JPanel buttonGroup = new JPanel();
buttonGroup.setBounds(65, 610, 710, 35);
contentPane.add(buttonGroup);
buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));
In this article we implement the photoButton functionality.
With the layout covered, let's move on to the core code.
Core Code and Key Concepts
(the full code is listed at the end)
Displaying Camera Frames in a JPanel
A JPanel is usually nested inside a JFrame's contentPane (the container a designer-generated JFrame provides to "hold" other components).
You can think of contentPane as a container. The containment chain is typically:
JFrame (our main class) -> contentPane -> our upper JPanel -> videoCamera (a JPanel).
Swing components have a repaint() method. Once it is called on a component, the
protected void paintComponent(Graphics g)
method in each of its child components is automatically invoked in turn.
That is why we defined a custom JPanel named VideoPanel and overrode its paintComponent method:
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
if (image != null)
g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
}
So in our main class FaceRecognize, after a frame is captured from the camera we pass it to the VideoPanel via its setImageWithMat method, then immediately call FaceRecognize's own repaint method. The "parent" event then propagates downward, refreshing each child component level by level: every child's paintComponent is triggered once.
The process of displaying camera frames in the videoCamera area is therefore:
- keep reading Mat objects from the camera inside the FaceRecognize class;
- set each Mat on the VideoPanel;
- keep calling FaceRecognize's repaint method to force the VideoPanel to "refresh" with what the camera captured;
- after each frame is displayed, sleep for 50 ms;
To get a smooth, continuous display, wrap these steps in a single background thread.
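The four steps above can be sketched as a bare skeleton. This is illustrative only: FrameSource is a hypothetical stand-in for OpenCV's VideoCapture, and the Swing/OpenCV calls are left as comments so the sketch stays self-contained.

```java
public class RefreshLoop {
    // Hypothetical stand-in for OpenCV's VideoCapture.read(Mat)
    interface FrameSource {
        boolean read();
    }

    // Runs the grab -> display -> sleep cycle; returns how many frames were shown.
    static int runLoop(FrameSource src, int maxFrames, long sleepMs) {
        int shown = 0;
        try {
            while (shown < maxFrames) {
                if (!src.read()) {     // step 1: grab a frame (a Mat in the real code)
                    break;             // the camera produced nothing: stop the loop
                }
                // step 2: videoPanel.setImageWithMat(frame);
                // step 3: frame.repaint();  (both omitted to keep the sketch headless)
                shown++;
                Thread.sleep(sleepMs); // step 4: throttle the refresh rate
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shown;
    }

    public static void main(String[] args) {
        // Simulate a camera that always delivers a frame, showing 5 frames:
        System.out.println(runLoop(() -> true, 5, 1)); // 5
    }
}
```

The real invokeCamera method later in the article runs exactly this cycle inside new Thread() { ... }.start(), with an 80 ms sleep, so the Swing event thread stays responsive.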
Opening the Camera with OpenCV
OpenCV drives the camera through the following class:
private static VideoCapture capture = new VideoCapture();
We then open the camera and read frames from it like this:
capture.open(0);
Scalar color = new Scalar(0, 255, 0);
MatOfRect faces = new MatOfRect();
if (capture.isOpened()) {
logger.info(">>>>>>video camera is working");
Mat faceMat = new Mat();
while (true) {
capture.read(faceMat);
if (!faceMat.empty()) {
faceCascade.detectMultiScale(faceMat, faces);
Rect[] facesArray = faces.toArray();
if (facesArray.length >= 1) {
for (int i = 0; i < facesArray.length; i++) {
Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
videoPanel.setImageWithMat(faceMat);
frame.repaint();
}
}
} else {
logger.info(">>>>>>no input found");
break;
}
Thread.sleep(80);
}
}
The code above shows the four steps I described.
- capture.open(0) opens the first camera attached to your machine. On a Mac, most models have a built-in camera, so this line drives the Mac's default built-in one;
- if (capture.isOpened()) is a must. Many online tutorials skip this check; when the camera never produces frames, people burn hours debugging their code only to discover the camera driver was faulty or the camera itself was broken, and swapping in another camera fixes everything;
- while (true) followed by capture.read(faceMat) keeps reading from the camera, copying each frame into a Mat object;
As mentioned, to make this feel smooth I wrapped the whole process in its own thread so it runs without blocking the Swing UI, and I draw a green rectangle to "frame" each detected face in the picture. Here is the resulting method:
public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
new Thread() {
public void run() {
CascadeClassifier faceCascade = new CascadeClassifier();
faceCascade.load(cascadeFileFullPath);
try {
capture.open(0);
Scalar color = new Scalar(0, 255, 0);
MatOfRect faces = new MatOfRect();
// Mat faceFrames = new Mat();
if (capture.isOpened()) {
logger.info(">>>>>>video camera is working");
Mat faceMat = new Mat();
while (true) {
capture.read(faceMat);
if (!faceMat.empty()) {
faceCascade.detectMultiScale(faceMat, faces);
Rect[] facesArray = faces.toArray();
if (facesArray.length >= 1) {
for (int i = 0; i < facesArray.length; i++) {
Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
videoPanel.setImageWithMat(faceMat);
frame.repaint();
// videoPanel.repaint();
}
}
} else {
logger.info(">>>>>>no input found");
break;
}
Thread.sleep(80);
}
}
} catch (Exception e) {
logger.error("invoke camera error: " + e.getMessage(), e);
}
}
}.start();
}
Combined with our main method, it is used like this:
public static void main(String[] args) {
FaceRecognize frame = new FaceRecognize();
frame.setVisible(true);
frame.invokeCamera(frame, videoCamera);
}
Taking a Photo with the Camera
We already covered the essence of this in "OpenCV Java Primer 4: Recognizing a Face": write a Mat out to a jpg file.
In this article, to make the result look a bit better, we do a few extra things:
- proportionally scale the Mat captured from the camera down onto the videoPreview panel;
- write the camera's current Mat out to an external file;
- wrap this process, too, in its own thread so it does not block the main UI;
Proportional Scaling
This lives in the ImageUtils class. It takes a Mat and converts it into a java.awt.Image;
it then uses AffineTransformOp to scale proportionally, by the image's original aspect ratio, toward the target size (width: 165, height: 200), and converts the result to a BufferedImage;
the scaled image is finally handed back to the FaceRecognize main class, which displays it in the preview area; the preview area is itself also declared as a VideoPanel.
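As a quick sanity check on the math, here is a tiny sketch of the ratio rule scale2 uses: portrait sources (height > width) scale by the target height, everything else by the target width. The 640x480 frame size is just an assumed typical webcam resolution, and I assume a target of 165 wide by 200 high as the text describes (note that scale2's signature takes height before width, so double-check the argument order when calling it).

```java
public class ScaleRatio {
    // Mirrors scale2's ratio choice: portrait sources scale by height, others by width.
    static double ratioFor(int srcW, int srcH, int targetW, int targetH) {
        if (srcH > srcW) {
            return (double) targetH / srcH;
        }
        return (double) targetW / srcW;
    }

    public static void main(String[] args) {
        // A hypothetical 640x480 camera frame scaled toward a 165x200 preview:
        double r = ratioFor(640, 480, 165, 200);
        int w = (int) (640 * r), h = (int) (480 * r);
        System.out.println(w + "x" + h); // 165x123
    }
}
```

The scaled 165x123 image is then padded with white up to the full target size (the bb branch in scale2), which is why the preview never distorts the face.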
To wire this up, we add an event handler to photoButton:
JButton photoButton = new JButton("Take Photo");
photoButton.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
logger.info(">>>>>>take photo performed");
StringBuffer photoPathStr = new StringBuffer();
photoPathStr.append(photoPath);
try {
if (capture.isOpened()) {
Mat myFace = new Mat();
while (true) {
capture.read(myFace);
if (!myFace.empty()) {
Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath.toString(), myFace);
takePhoto.start();// write the photo to disk
videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview panel
videoPreview.repaint();// repaint the preview panel
break;
}
}
}
} catch (Exception ex) {
logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
}
}
});
TakePhotoProcess is a simple worker thread; its code is as follows:
package org.mk.opencv.sample;
import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
public class TakePhotoProcess extends Thread {
private static Logger logger = Logger.getLogger(TakePhotoProcess.class);
private String imgPath;
private Mat faceMat;
private final static Scalar color = new Scalar(0, 0, 255);
public TakePhotoProcess(String imgPath, Mat faceMat) {
this.imgPath = imgPath;
this.faceMat = faceMat;
}
public void run() {
try {
long currentTime = System.currentTimeMillis();
StringBuffer samplePath = new StringBuffer();
samplePath.append(imgPath).append(currentTime).append(".jpg");
Imgcodecs.imwrite(samplePath.toString(), faceMat);
logger.info(">>>>>>write image into->" + samplePath.toString());
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
}
The other two buttons, trainButton and identifyButton, are left to the next two articles; we are going step by step so the fundamentals sink in.
When FaceRecognize runs and you click photoButton, the result looks like the screenshot below:
Full Code
OpenCVUtil.java
package org.mk.opencv.util;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.io.File;
import org.apache.log4j.Logger;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
public class OpenCVUtil {
private static Logger logger = Logger.getLogger(OpenCVUtil.class);
public static Image matToImage(Mat matrix) {
int type = BufferedImage.TYPE_BYTE_GRAY;
if (matrix.channels() > 1) {
type = BufferedImage.TYPE_3BYTE_BGR;
}
int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
byte[] buffer = new byte[bufferSize];
matrix.get(0, 0, buffer); // read all the pixels
BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
return image;
}
public static List<String> getFilesFromFolder(String folderPath) {
List<String> fileList = new ArrayList<String>();
File f = new File(folderPath);
if (f.isDirectory()) {
File[] files = f.listFiles();
for (File singleFile : files) {
fileList.add(singleFile.getPath());
}
}
return fileList;
}
public static String randomFileName() {
StringBuffer fn = new StringBuffer();
fn.append(System.currentTimeMillis()).append((int) (System.currentTimeMillis() % (10000 - 1) + 1))
.append(".jpg");
return fn.toString();
}
public static List<FileBean> getPicFromFolder(String rootPath) {
List<FileBean> fList = new ArrayList<FileBean>();
int fileNum = 0, folderNum = 0;
File file = new File(rootPath);
if (file.exists()) {
LinkedList<File> list = new LinkedList<File>();
File[] files = file.listFiles();
for (File file2 : files) {
if (file2.isDirectory()) {
// logger.info(">>>>>>folder: " + file2.getAbsolutePath());
list.add(file2);
folderNum++;
} else {
// logger.info(">>>>>>file: " + file2.getAbsolutePath());
FileBean f = new FileBean();
String fileName = file2.getName();
String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
File fParent = new File(file2.getParent());
String parentFolderName = fParent.getName();
f.setFileFullPath(file2.getAbsolutePath());
f.setFileType(suffix);
f.setFolderName(parentFolderName);
fList.add(f);
fileNum++;
}
}
File temp_file;
while (!list.isEmpty()) {
temp_file = list.removeFirst();
files = temp_file.listFiles();
for (File file2 : files) {
if (file2.isDirectory()) {
// System.out.println("folder: " + file2.getAbsolutePath());
list.add(file2);
folderNum++;
} else {
// logger.info(">>>>>>file: " + file2.getAbsolutePath());
FileBean f = new FileBean();
String fileName = file2.getName();
String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
File fParent = new File(file2.getParent());
String parentFolderName = fParent.getName();
f.setFileFullPath(file2.getAbsolutePath());
f.setFileType(suffix);
f.setFolderName(parentFolderName);
fList.add(f);
fileNum++;
}
}
}
} else {
logger.info(">>>>>>folder does not exist!");
}
// logger.info(">>>>>>folders: " + folderNum + ", files: " + fileNum);
return fList;
}
public static BufferedImage matToBufferedImage(Mat matrix) {
int cols = matrix.cols();
int rows = matrix.rows();
int elemSize = (int) matrix.elemSize();
byte[] data = new byte[cols * rows * elemSize];
int type;
matrix.get(0, 0, data);
switch (matrix.channels()) {
case 1:
type = BufferedImage.TYPE_BYTE_GRAY;
break;
case 3:
type = BufferedImage.TYPE_3BYTE_BGR;
// bgr to rgb
byte b;
for (int i = 0; i < data.length; i = i + 3) {
b = data[i];
data[i] = data[i + 2];
data[i + 2] = b;
}
break;
default:
return null;
}
BufferedImage image2 = new BufferedImage(cols, rows, type);
image2.getRaster().setDataElements(0, 0, cols, rows, data);
return image2;
}
public static Mat bufferedImageToMat(BufferedImage bi) {
Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
mat.put(0, 0, data);
return mat;
}
}
ImageUtils.java
package org.mk.opencv.util;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.HeadlessException;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import org.opencv.core.Mat;
public class ImageUtils {
/**
 * Several common image formats
 */
public static String IMAGE_TYPE_GIF = "gif";// Graphics Interchange Format
public static String IMAGE_TYPE_JPG = "jpg";// Joint Photographic Experts Group
public static String IMAGE_TYPE_JPEG = "jpeg";// Joint Photographic Experts Group
public static String IMAGE_TYPE_BMP = "bmp";// short for Bitmap, the standard image format on Windows
public static String IMAGE_TYPE_PNG = "png";// Portable Network Graphics
public static String IMAGE_TYPE_PSD = "psd";// Photoshop's native format
/**
 * Scale an image proportionally (by height and width)
 *
 * @param mat    the source image as an OpenCV Mat
 * @param height the target height
 * @param width  the target width
 * @param bb     whether to pad with white when the aspect ratio differs: true to pad, false not to
 */
public final synchronized static Image scale2(Mat mat, int height, int width, boolean bb) throws Exception {
// boolean flg = false;
Image itemp = null;
try {
double ratio = 0.0; // scale ratio
// File f = new File(srcImageFile);
// BufferedImage bi = ImageIO.read(f);
BufferedImage bi = OpenCVUtil.matToBufferedImage(mat);
itemp = bi.getScaledInstance(width, height, Image.SCALE_SMOOTH);
// compute the ratio
// if ((bi.getHeight() > height) || (bi.getWidth() > width)) {
// flg = true;
if (bi.getHeight() > bi.getWidth()) {
ratio = Integer.valueOf(height).doubleValue() / bi.getHeight();
} else {
ratio = Integer.valueOf(width).doubleValue() / bi.getWidth();
}
AffineTransformOp op = new AffineTransformOp(AffineTransform.getScaleInstance(ratio, ratio), null);
itemp = op.filter(bi, null);
// }
if (bb) {// pad with white
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Graphics2D g = image.createGraphics();
g.setColor(Color.white);
g.fillRect(0, 0, width, height);
if (width == itemp.getWidth(null))
g.drawImage(itemp, 0, (height - itemp.getHeight(null)) / 2, itemp.getWidth(null),
itemp.getHeight(null), Color.white, null);
else
g.drawImage(itemp, (width - itemp.getWidth(null)) / 2, 0, itemp.getWidth(null),
itemp.getHeight(null), Color.white, null);
g.dispose();
itemp = image;
}
// if (flg)
// ImageIO.write((BufferedImage) itemp, "JPEG", new File(result));
} catch (Exception e) {
throw new Exception("scale2 error: " + e.getMessage(), e);
}
return itemp;
}
public static BufferedImage toBufferedImage(Image image) {
if (image instanceof BufferedImage) {
return (BufferedImage) image;
}
// this makes sure all pixels of the image are loaded
image = new ImageIcon(image).getImage();
// use this if the image has transparency
// boolean hasAlpha = hasAlpha(image);
// create a BufferedImage in a screen-compatible format
BufferedImage bimage = null;
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
try {
// determine the transparency type for the new buffered image
int transparency = Transparency.OPAQUE;
// if (hasAlpha) {
transparency = Transparency.BITMASK;
// }
// create the BufferedImage
GraphicsDevice gs = ge.getDefaultScreenDevice();
GraphicsConfiguration gc = gs.getDefaultConfiguration();
bimage = gc.createCompatibleImage(image.getWidth(null), image.getHeight(null), transparency);
} catch (HeadlessException e) {
// the system may not have a screen
}
if (bimage == null) {
// create a BufferedImage with a default color model
int type = BufferedImage.TYPE_INT_RGB;
// int type = BufferedImage.TYPE_3BYTE_BGR;//by wang
// if (hasAlpha) {
type = BufferedImage.TYPE_INT_ARGB;
// }
bimage = new BufferedImage(image.getWidth(null), image.getHeight(null), type);
}
// copy the image onto the BufferedImage
Graphics g = bimage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return bimage;
}
}
FileBean.java
package org.mk.opencv.util;
import java.io.Serializable;
public class FileBean implements Serializable {
private String fileFullPath;
private String folderName;
private String fileType;
public String getFileType() {
return fileType;
}
public void setFileType(String fileType) {
this.fileType = fileType;
}
public String getFileFullPath() {
return fileFullPath;
}
public void setFileFullPath(String fileFullPath) {
this.fileFullPath = fileFullPath;
}
public String getFolderName() {
return folderName;
}
public void setFolderName(String folderName) {
this.folderName = folderName;
}
}
VideoPanel.java
package org.mk.opencv;
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;
public class VideoPanel extends JPanel {
private Image image;
public void setImageWithMat(Mat mat) {
image = OpenCVUtil.matToBufferedImage(mat);
this.repaint();
}
public void SetImageWithImg(Image img) {
image = img;
}
public Mat getMatFromImage() {
Mat faceMat = new Mat();
BufferedImage bi = ImageUtils.toBufferedImage(image);
faceMat = OpenCVUtil.bufferedImageToMat(bi);
return faceMat;
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
if (image != null)
g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
}
public static VideoPanel show(String title, int width, int height, int open) {
JFrame frame = new JFrame(title);
if (open == 0) {
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
} else {
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
}
frame.setSize(width, height);
frame.setBounds(0, 0, width, height);
VideoPanel videoPanel = new VideoPanel();
videoPanel.setSize(width, height);
frame.setContentPane(videoPanel);
frame.setVisible(true);
return videoPanel;
}
}
TakePhotoProcess.java
package org.mk.opencv.sample;
import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
public class TakePhotoProcess extends Thread {
private static Logger logger = Logger.getLogger(TakePhotoProcess.class);
private String imgPath;
private Mat faceMat;
private final static Scalar color = new Scalar(0, 0, 255);
public TakePhotoProcess(String imgPath, Mat faceMat) {
this.imgPath = imgPath;
this.faceMat = faceMat;
}
public void run() {
try {
long currentTime = System.currentTimeMillis();
StringBuffer samplePath = new StringBuffer();
samplePath.append(imgPath).append(currentTime).append(".jpg");
Imgcodecs.imwrite(samplePath.toString(), faceMat);
logger.info(">>>>>>write image into->" + samplePath.toString());
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
}
FaceRecognize.java (the main class)
package org.mk.opencv.sample;
import java.awt.EventQueue;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.border.EmptyBorder;
import org.apache.log4j.Logger;
import org.mk.opencv.VideoPanel;
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;
import javax.swing.border.BevelBorder;
import javax.swing.JLabel;
import javax.swing.SwingConstants;
import java.awt.GridLayout;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
public class FaceRecognize extends JFrame {
static {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
private static Logger logger = Logger.getLogger(FaceRecognize.class);
private static final String cascadeFileFullPath = "D:\\opencvinstall\\build\\install\\etc\\lbpcascades\\lbpcascade_frontalface.xml";
private static final String photoPath = "D:\\opencv-demo\\face\\";
private JPanel contentPane;
protected static VideoPanel videoCamera = new VideoPanel();
private static final Size faceSize = new Size(165, 200);
private static VideoCapture capture = new VideoCapture();
/**
* Launch the application.
*/
public static void main(String[] args) {
FaceRecognize frame = new FaceRecognize();
frame.setVisible(true);
frame.invokeCamera(frame, videoCamera);
}
public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
new Thread() {
public void run() {
CascadeClassifier faceCascade = new CascadeClassifier();
faceCascade.load(cascadeFileFullPath);
try {
capture.open(0);
Scalar color = new Scalar(0, 255, 0);
MatOfRect faces = new MatOfRect();
// Mat faceFrames = new Mat();
if (capture.isOpened()) {
logger.info(">>>>>>video camera is working");
Mat faceMat = new Mat();
while (true) {
capture.read(faceMat);
if (!faceMat.empty()) {
faceCascade.detectMultiScale(faceMat, faces);
Rect[] facesArray = faces.toArray();
if (facesArray.length >= 1) {
for (int i = 0; i < facesArray.length; i++) {
Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
videoPanel.setImageWithMat(faceMat);
frame.repaint();
// videoPanel.repaint();
}
}
} else {
logger.info(">>>>>>no input found");
break;
}
Thread.sleep(80);
}
}
} catch (Exception e) {
logger.error("invoke camera error: " + e.getMessage(), e);
}
}
}.start();
}
/**
* Create the frame.
*/
public FaceRecognize() {
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 1024, 768);
contentPane = new JPanel();
contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(contentPane);
contentPane.setLayout(null);
JPanel cameraGroup = new JPanel();
cameraGroup.setBounds(10, 10, 988, 580);
contentPane.add(cameraGroup);
cameraGroup.setLayout(null);
JLabel videoDescriptionLabel = new JLabel("Video");
videoDescriptionLabel.setHorizontalAlignment(SwingConstants.CENTER);
videoDescriptionLabel.setBounds(0, 10, 804, 23);
cameraGroup.add(videoDescriptionLabel);
videoCamera.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
videoCamera.setBounds(10, 43, 794, 527);
cameraGroup.add(videoCamera);
// JPanel videoPreview = new JPanel();
VideoPanel videoPreview = new VideoPanel();
videoPreview.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
videoPreview.setBounds(807, 359, 171, 211);
cameraGroup.add(videoPreview);
JLabel lblNewLabel = new JLabel("Preview");
lblNewLabel.setHorizontalAlignment(SwingConstants.CENTER);
lblNewLabel.setBounds(807, 307, 171, 42);
cameraGroup.add(lblNewLabel);
JPanel buttonGroup = new JPanel();
buttonGroup.setBounds(65, 610, 710, 35);
contentPane.add(buttonGroup);
buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));
JButton photoButton = new JButton("Take Photo");
photoButton.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
logger.info(">>>>>>take photo performed");
StringBuffer photoPathStr = new StringBuffer();
photoPathStr.append(photoPath);
try {
if (capture.isOpened()) {
Mat myFace = new Mat();
while (true) {
capture.read(myFace);
if (!myFace.empty()) {
Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath.toString(), myFace);
takePhoto.start();// write the photo to disk
videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview panel
videoPreview.repaint();// repaint the preview panel
break;
}
}
}
} catch (Exception ex) {
logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
}
}
});
buttonGroup.add(photoButton);
JButton trainButton = new JButton("Train");
buttonGroup.add(trainButton);
JButton identifyButton = new JButton("Identify");
buttonGroup.add(identifyButton);
}
}
This concludes the article on implementing a photo capture feature with Java + OpenCV.