
Java + OpenCV: Driving a Camera to Take Photos

Updated: 2022-03-30 14:01:32   Author: TGITCIC
Now that we are increasingly comfortable with the environment, with basic Mat usage, and with Java Swing, this article drives a camera through OpenCV to implement face detection and photo capture. Readers who need this can use it as a reference.

Now that we are increasingly comfortable with the environment, with basic Mat usage, and with Java Swing, today we move on to several scenarios in which OpenCV drives a camera.

Environment Setup

1. Prepare an external USB camera. I use two kinds here: an ordinary Logitech webcam and a binocular (dual-lens) camera, which will later be used for liveness detection.

2. Eclipse 2021-12.

3. JDK 11 or later, because we write our Swing UI with the WindowBuilder form-designer plug-in, and driving WindowBuilder in Eclipse 2021-12 requires JDK 11+.

4. Development is done on Windows 10. You can also use a Mac, but there is a catch when Java drives a camera on macOS: you cannot call the camera directly from inside Eclipse. It reports "This app has crashed because it attempted to access privacy-sensitive data without a usage description", or:

OpenCV: not authorized to capture video (status 0), requesting...
OpenCV: can not spin main run loop from other thread, set OPENCV_AVFOUNDATION_SKIP_AUTH=1 to disable authorization request and perform it in your application.
OpenCV: camera failed to properly initialize

These errors are all caused by macOS permissions: they mean you are not authorized to access the Mac's built-in devices. If you write Swift in Xcode you can solve this through info.plist, but because we launch a Java main method from inside Eclipse, there is currently no way on macOS to let code running inside Eclipse drive Mac peripherals. To run OpenCV Java with camera access on macOS, you must package the project as an executable jar and start it from a terminal window with java -jar. On first launch, macOS will ask you to authorize that terminal; click Yes, confirm with your fingerprint or password, then run the java -jar OpenCV application again from the terminal, and Java will be able to drive the camera on macOS. This is very inconvenient for coding and debugging, which is the main reason we develop OpenCV Java on Windows 10.

Building the Main UI

Our main UI is a Java Swing JFrame application; it looks like this:

Overall Structure

We split the window into upper and lower areas. The layout is 1024×768, uses an absolute (null) layout, and exits the program when the close button is clicked:

setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 1024, 768);
contentPane = new JPanel();
contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(contentPane);
contentPane.setLayout(null);

Upper Area

We use a JPanel named cameraGroup for grouping; this JPanel also uses an absolute layout:

JPanel cameraGroup = new JPanel();
cameraGroup.setBounds(10, 10, 988, 580);
contentPane.add(cameraGroup);
cameraGroup.setLayout(null);

Inside cameraGroup, two more JPanels are placed, a large one on the left and a small one on the right:

  • videoCamera
  • videoPreview

videoCamera is a custom JPanel:

protected static VideoPanel videoCamera = new VideoPanel();

While the camera is open, it continuously "paints" the frames grabbed from the camera onto the JPanel for display. The code is as follows:

package org.mk.opencv;
 
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;
 
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;
 
public class VideoPanel extends JPanel {
 
    private Image image;
 
    public void setImageWithMat(Mat mat) {
        image = OpenCVUtil.matToBufferedImage(mat);
        this.repaint();
    }
 
    public void SetImageWithImg(Image img) {
        image = img;
    }
 
    public Mat getMatFromImage() {
        Mat faceMat = new Mat();
        BufferedImage bi = ImageUtils.toBufferedImage(image);
        faceMat = OpenCVUtil.bufferedImageToMat(bi);
        return faceMat;
    }
 
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image != null)
            g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
    }
 
    public static VideoPanel show(String title, int width, int height, int open) {
        JFrame frame = new JFrame(title);
        if (open == 0) {
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        } else {
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        }
 
        frame.setSize(width, height);
        frame.setBounds(0, 0, width, height);
        VideoPanel videoPanel = new VideoPanel();
        videoPanel.setSize(width, height);
        frame.setContentPane(videoPanel);
        frame.setVisible(true);
        return videoPanel;
    }
}

Lower Area

In the lower area we place a buttonGroup. This buttonGroup uses a grid layout (GridLayout) and holds three buttons:

JPanel buttonGroup = new JPanel();
buttonGroup.setBounds(65, 610, 710, 35);
contentPane.add(buttonGroup);
buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));

Today we implement the functionality behind photoButton.

That covers the layout; now on to the core code.

Core Code and Key Points

(The complete code is given at the end.)

Displaying Camera Frames in a JPanel

A JPanel is normally nested inside a JFrame's contentPane (the container, generated automatically by the GUI designer, that "holds" the other components).

You can think of contentPane as a container. The nesting usually looks like this:

JFrame (our main class) -> contentPane -> our upper-area JPanel -> videoCamera (a JPanel).

Java Swing has a method called repaint(). Once it is called on a component, the

protected void paintComponent(Graphics g)

method of every child component is automatically invoked in turn.

That is why we defined a custom JPanel called VideoPanel and overrode its paintComponent method:

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image != null)
            g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
    }

So in our main class FaceRecognize, once a frame has been obtained from the camera, we set it through VideoPanel's setImageWithMat method and then immediately call FaceRecognize's own repaint method. The "parent" event propagates all the way down, refreshing the child components level by level: every child's paintComponent is triggered once.

The process of getting camera frames onto the videoCamera area is:

  • Continuously read Mat objects from the camera in the FaceRecognize class;
  • Set each Mat into the VideoPanel;
  • Keep calling FaceRecognize's repaint method to force the VideoPanel to "refresh" with what the camera sees;
  • After each display, sleep for 50 ms.

For a smooth, continuous display, wrap the steps above in a dedicated worker thread.

Opening the Camera with OpenCV

OpenCV uses the following class to drive the camera:

private static VideoCapture capture = new VideoCapture();

Then open the camera and read frames from it:

capture.open(0);
Scalar color = new Scalar(0, 255, 0);
MatOfRect faces = new MatOfRect();
if (capture.isOpened()) {
    logger.info(">>>>>>video camera in working");
    Mat faceMat = new Mat();
    while (true) {
        capture.read(faceMat);
        if (!faceMat.empty()) {
            faceCascade.detectMultiScale(faceMat, faces);
            Rect[] facesArray = faces.toArray();
            if (facesArray.length >= 1) {
                for (int i = 0; i < facesArray.length; i++) {
                    Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
                    videoPanel.setImageWithMat(faceMat);
                    frame.repaint();                                    
                }
            }
        } else {
            logger.info(">>>>>>not found anyinput");
            break;
        }
        Thread.sleep(80);
    }
}

In the code above you can see the steps I described earlier:

  • capture.open(0) opens the first camera attached to your computer. On a Mac, since most Macs have a built-in camera, this line drives the default built-in camera;
  • if (capture.isOpened()) is mandatory. Many online tutorials skip this check; then, when the camera never shows anything, people waste far too long debugging the code, only to discover in the end that the camera driver was faulty or the camera was broken, and swapping in another camera would have fixed it;
  • while (true) followed by capture.read(faceMat) continuously reads frames from the camera into a Mat object.

As mentioned above, to make this process smoother I wrapped it in its own thread so that it does not block the Swing main UI, and I "frame" each face in the picture with a green rectangle. For that I wrote the following method:

    public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
        new Thread() {
            public void run() {
                CascadeClassifier faceCascade = new CascadeClassifier();
                faceCascade.load(cascadeFileFullPath);
                try {
                    capture.open(0);
                    Scalar color = new Scalar(0, 255, 0);
                    MatOfRect faces = new MatOfRect();
                    // Mat faceFrames = new Mat();
                    if (capture.isOpened()) {
                        logger.info(">>>>>>video camera in working");
                        Mat faceMat = new Mat();
                        while (true) {
                            capture.read(faceMat);
                            if (!faceMat.empty()) {
                                faceCascade.detectMultiScale(faceMat, faces);
                                Rect[] facesArray = faces.toArray();
                                if (facesArray.length >= 1) {
                                    for (int i = 0; i < facesArray.length; i++) {
                                        Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
                                        videoPanel.setImageWithMat(faceMat);
                                        frame.repaint();
                                        // videoPanel.repaint();
                                    }
                                }
                            } else {
                                logger.info(">>>>>>not found anyinput");
                                break;
                            }
                            Thread.sleep(80);
                        }
                    }
                } catch (Exception e) {
                    logger.error("invoke camera error: " + e.getMessage(), e);
                }
            }
        }.start();
    }

Combined with our main method, it is used like this:

    public static void main(String[] args) {
        FaceRecognize frame = new FaceRecognize();
        frame.setVisible(true);
        frame.invokeCamera(frame, videoCamera);
    }

Taking a Photo with the Camera

This was already covered in <a href="http://www.dbjr.com.cn/article/242428.htm" target="_blank">OpenCV Java Part 4: Recognizing "a Face"</a>: it simply writes a Mat out to a jpg file.

In this article, to make the result a little nicer, we do the following:

  • Proportionally scale the Mat obtained from the camera down onto "videoPreview";
  • Write the camera's current Mat out to an external file;
  • Wrap the whole process in its own thread so it does not block the main class's UI.

Proportional Image Scaling

This lives in the ImageUtils class. It receives a Mat and converts it to a java.awt.Image;

it then uses java.awt.image.AffineTransformOp to scale the image proportionally (the ratio is derived from the original image) toward the target size (width 165, height 200), and converts the result to a BufferedImage;

the scaled image is then handed back to the FaceRecognize main class to be displayed in our preview area; the preview area is likewise declared with the VideoPanel class.
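The branch scale2 uses to pick the scale ratio can be exercised in isolation. The sketch below is hypothetical (the class and method names are mine, not part of the project) and only mirrors the ratio math:

```java
// Hypothetical sketch of the ratio choice made inside ImageUtils.scale2:
// portrait sources scale by height, landscape sources by width,
// mirroring the bi.getHeight() > bi.getWidth() branch in scale2.
class RatioSketch {
    static double ratio(int srcWidth, int srcHeight, int dstWidth, int dstHeight) {
        if (srcHeight > srcWidth) {
            return (double) dstHeight / srcHeight; // portrait: fit the height
        }
        return (double) dstWidth / srcWidth;       // landscape: fit the width
    }

    public static void main(String[] args) {
        // a 640x480 landscape frame scaled toward a 200x165 box
        System.out.println(RatioSketch.ratio(640, 480, 200, 165)); // 200/640 = 0.3125
    }
}
```

Applying this single ratio to both axes is what keeps the preview undistorted; the white padding in scale2 then fills whichever dimension comes up short.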

To do this, we wire up an event handler on photoButton:

JButton photoButton = new JButton("Take Photo");
        photoButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                logger.info(">>>>>>take photo performed");
                StringBuffer photoPathStr = new StringBuffer();
                photoPathStr.append(photoPath);
                try {
                    if (capture.isOpened()) {
                        Mat myFace = new Mat();
                        while (true) {
                            capture.read(myFace);
                            if (!myFace.empty()) {
                                Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
                                TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath.toString(), myFace);
                                takePhoto.start();// write the photo to disk
                                videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview panel
                                videoPreview.repaint();// repaint the preview panel
                                break;
                            }
                        }
                    }
                } catch (Exception ex) {
                    logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
                }
            }
        });

TakePhotoProcess runs on its own thread; the code is as follows:

package org.mk.opencv.sample;
 
import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
 
public class TakePhotoProcess extends Thread {
    private static Logger logger = Logger.getLogger(TakePhotoProcess.class);
 
    private String imgPath;
    private Mat faceMat;
    private final static Scalar color = new Scalar(0, 0, 255);
 
    public TakePhotoProcess(String imgPath, Mat faceMat) {
        this.imgPath = imgPath;
        this.faceMat = faceMat;
    }
 
    public void run() {
        try {
            long currentTime = System.currentTimeMillis();
            StringBuffer samplePath = new StringBuffer();
            samplePath.append(imgPath).append(currentTime).append(".jpg");
            Imgcodecs.imwrite(samplePath.toString(), faceMat);
            logger.info(">>>>>>write image into->" + samplePath.toString());
 
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
 
}
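The file name TakePhotoProcess writes is simply the target directory, the capture timestamp, and a ".jpg" suffix. That string assembly can be checked on its own; the helper below is a hypothetical sketch, not part of the project:

```java
// Hypothetical sketch of the path assembly done in TakePhotoProcess.run():
// directory + System.currentTimeMillis() + ".jpg"
class PhotoPathSketch {
    static String buildPath(String imgPath, long currentTimeMillis) {
        // same append chain as in run(); millisecond timestamps make collisions unlikely
        StringBuilder samplePath = new StringBuilder();
        samplePath.append(imgPath).append(currentTimeMillis).append(".jpg");
        return samplePath.toString();
    }

    public static void main(String[] args) {
        System.out.println(PhotoPathSketch.buildPath("D:\\opencv-demo\\face\\", System.currentTimeMillis()));
    }
}
```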

We leave the other two buttons, "trainButton" and "identifyButton", for the next two articles. We are taking this one step at a time so everyone can build a solid foundation.

Finally, with FaceRecognize running, clicking photoButton produces the result shown below:

Complete Code

OpenCVUtil.java

package org.mk.opencv.util;
 
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.io.File;
 
import org.apache.log4j.Logger;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
 
 
public class OpenCVUtil {
    private static Logger logger = Logger.getLogger(OpenCVUtil.class);
 
    public static Image matToImage(Mat matrix) {
        int type = BufferedImage.TYPE_BYTE_GRAY;
        if (matrix.channels() > 1) {
            type = BufferedImage.TYPE_3BYTE_BGR;
        }
        int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
        byte[] buffer = new byte[bufferSize];
        matrix.get(0, 0, buffer); // grab all pixels
        BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
        final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
        return image;
    }
 
    public static List<String> getFilesFromFolder(String folderPath) {
        List<String> fileList = new ArrayList<String>();
        File f = new File(folderPath);
        if (f.isDirectory()) {
            File[] files = f.listFiles();
            for (File singleFile : files) {
                fileList.add(singleFile.getPath());
            }
        }
        return fileList;
    }
 
    public static String randomFileName() {
        StringBuffer fn = new StringBuffer();
        fn.append(System.currentTimeMillis()).append((int) (System.currentTimeMillis() % (10000 - 1) + 1))
                .append(".jpg");
        return fn.toString();
    }
 
    public static List<FileBean> getPicFromFolder(String rootPath) {
        List<FileBean> fList = new ArrayList<FileBean>();
        int fileNum = 0, folderNum = 0;
        File file = new File(rootPath);
        if (file.exists()) {
            LinkedList<File> list = new LinkedList<File>();
            File[] files = file.listFiles();
            for (File file2 : files) {
                if (file2.isDirectory()) {
                    // logger.info(">>>>>>folder: " + file2.getAbsolutePath());
                    list.add(file2);
                    folderNum++;
                } else {
                    // logger.info(">>>>>>file: " + file2.getAbsolutePath());
                    FileBean f = new FileBean();
                    String fileName = file2.getName();
                    String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
                    File fParent = new File(file2.getParent());
                    String parentFolderName = fParent.getName();
                    f.setFileFullPath(file2.getAbsolutePath());
                    f.setFileType(suffix);
                    f.setFolderName(parentFolderName);
                    fList.add(f);
                    fileNum++;
                }
            }
            File temp_file;
            while (!list.isEmpty()) {
                temp_file = list.removeFirst();
                files = temp_file.listFiles();
                for (File file2 : files) {
                    if (file2.isDirectory()) {
                        // System.out.println("folder: " + file2.getAbsolutePath());
                        list.add(file2);
                        folderNum++;
                    } else {
                        // logger.info(">>>>>>file: " + file2.getAbsolutePath());
                        FileBean f = new FileBean();
                        String fileName = file2.getName();
                        String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
                        File fParent = new File(file2.getParent());
                        String parentFolderName = fParent.getName();
                        f.setFileFullPath(file2.getAbsolutePath());
                        f.setFileType(suffix);
                        f.setFolderName(parentFolderName);
                        fList.add(f);
                        fileNum++;
                    }
                }
            }
        } else {
            logger.info(">>>>>>file does not exist!");
        }
        // logger.info(">>>>>>folders: " + folderNum + ", files: " + fileNum);
        return fList;
    }
 
    public static BufferedImage matToBufferedImage(Mat matrix) {
        int cols = matrix.cols();
        int rows = matrix.rows();
        int elemSize = (int) matrix.elemSize();
        byte[] data = new byte[cols * rows * elemSize];
        int type;
        matrix.get(0, 0, data);
        switch (matrix.channels()) {
        case 1:
            type = BufferedImage.TYPE_BYTE_GRAY;
            break;
        case 3:
            type = BufferedImage.TYPE_3BYTE_BGR;
            // bgr to rgb
            byte b;
            for (int i = 0; i < data.length; i = i + 3) {
                b = data[i];
                data[i] = data[i + 2];
                data[i + 2] = b;
            }
            break;
        default:
            return null;
        }
        BufferedImage image2 = new BufferedImage(cols, rows, type);
        image2.getRaster().setDataElements(0, 0, cols, rows, data);
        return image2;
    }
 
    public static Mat bufferedImageToMat(BufferedImage bi) {
        Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
        byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
        mat.put(0, 0, data);
        return mat;
    }
}

ImageUtils.java

package org.mk.opencv.util;
 
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.HeadlessException;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
 
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
 
import org.opencv.core.Mat;
 
public class ImageUtils {
 
    /**
     * Several common image formats
     */
    public static String IMAGE_TYPE_GIF = "gif";// Graphics Interchange Format
    public static String IMAGE_TYPE_JPG = "jpg";// Joint Photographic Experts Group
    public static String IMAGE_TYPE_JPEG = "jpeg";// Joint Photographic Experts Group
    public static String IMAGE_TYPE_BMP = "bmp";// short for Bitmap, the standard image format on Windows
    public static String IMAGE_TYPE_PNG = "png";// Portable Network Graphics
    public static String IMAGE_TYPE_PSD = "psd";// Photoshop's native format
 
    /**
     * Scale an image proportionally toward the given height and width.
     *
     * @param mat    source image as an OpenCV Mat
     * @param height target height
     * @param width  target width
     * @param bb     whether to pad with white when the aspect ratio differs: true to pad, false not to
     */
    public final synchronized static Image scale2(Mat mat, int height, int width, boolean bb) throws Exception {
        // boolean flg = false;
        Image itemp = null;
        try {
            double ratio = 0.0; // scale ratio
            // File f = new File(srcImageFile);
            // BufferedImage bi = ImageIO.read(f);
            BufferedImage bi = OpenCVUtil.matToBufferedImage(mat);
            itemp = bi.getScaledInstance(width, height, bi.SCALE_SMOOTH);
            // compute the scale ratio
            // if ((bi.getHeight() > height) || (bi.getWidth() > width)) {
            // flg = true;
            if (bi.getHeight() > bi.getWidth()) {
                ratio = Integer.valueOf(height).doubleValue() / bi.getHeight();
            } else {
                ratio = Integer.valueOf(width).doubleValue() / bi.getWidth();
            }
            AffineTransformOp op = new AffineTransformOp(AffineTransform.getScaleInstance(ratio, ratio), null);
            itemp = op.filter(bi, null);
            // }
            if (bb) {// pad with white
                BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = image.createGraphics();
                g.setColor(Color.white);
                g.fillRect(0, 0, width, height);
                if (width == itemp.getWidth(null))
                    g.drawImage(itemp, 0, (height - itemp.getHeight(null)) / 2, itemp.getWidth(null),
                            itemp.getHeight(null), Color.white, null);
                else
                    g.drawImage(itemp, (width - itemp.getWidth(null)) / 2, 0, itemp.getWidth(null),
                            itemp.getHeight(null), Color.white, null);
                g.dispose();
                itemp = image;
            }
            // if (flg)
            // ImageIO.write((BufferedImage) itemp, "JPEG", new File(result));
        } catch (Exception e) {
            throw new Exception("scale2 error: " + e.getMessage(), e);
        }
        return itemp;
    }
 
    public static BufferedImage toBufferedImage(Image image) {
        if (image instanceof BufferedImage) {
            return (BufferedImage) image;
        }
 
        // make sure all pixels of the image are loaded
        image = new ImageIcon(image).getImage();
 
        // use this method if the image has transparency
//        boolean hasAlpha = hasAlpha(image);
 
        // create a BufferedImage in a format compatible with the screen
        BufferedImage bimage = null;
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        try {
            // determine the transparency of the new buffered image type
            int transparency = Transparency.OPAQUE;
            // if (hasAlpha) {
            transparency = Transparency.BITMASK;
            // }
 
            // create the BufferedImage
            GraphicsDevice gs = ge.getDefaultScreenDevice();
            GraphicsConfiguration gc = gs.getDefaultConfiguration();
            bimage = gc.createCompatibleImage(image.getWidth(null), image.getHeight(null), transparency);
        } catch (HeadlessException e) {
            // the system may not have a screen (headless)
        }
 
        if (bimage == null) {
            // create a BufferedImage with a default color model
            int type = BufferedImage.TYPE_INT_RGB;
            // int type = BufferedImage.TYPE_3BYTE_BGR;//by wang
            // if (hasAlpha) {
            type = BufferedImage.TYPE_INT_ARGB;
            // }
            bimage = new BufferedImage(image.getWidth(null), image.getHeight(null), type);
        }
 
        // copy the image into the BufferedImage
        Graphics g = bimage.createGraphics();
 
        // draw the image onto the BufferedImage
        g.drawImage(image, 0, 0, null);
        g.dispose();
 
        return bimage;
    }
}

FileBean.java

package org.mk.opencv.util;
 
import java.io.Serializable;
 
public class FileBean implements Serializable {
 
    private String fileFullPath;
    private String folderName;
    private String fileType;
 
    public String getFileType() {
        return fileType;
    }
 
    public void setFileType(String fileType) {
        this.fileType = fileType;
    }
 
    public String getFileFullPath() {
        return fileFullPath;
    }
 
    public void setFileFullPath(String fileFullPath) {
        this.fileFullPath = fileFullPath;
    }
 
    public String getFolderName() {
        return folderName;
    }
 
    public void setFolderName(String folderName) {
        this.folderName = folderName;
    }
 
}

VideoPanel.java

package org.mk.opencv;
 
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;
 
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;
 
public class VideoPanel extends JPanel {
 
    private Image image;
 
    public void setImageWithMat(Mat mat) {
        image = OpenCVUtil.matToBufferedImage(mat);
        this.repaint();
    }
 
    public void SetImageWithImg(Image img) {
        image = img;
    }
 
    public Mat getMatFromImage() {
        Mat faceMat = new Mat();
        BufferedImage bi = ImageUtils.toBufferedImage(image);
        faceMat = OpenCVUtil.bufferedImageToMat(bi);
        return faceMat;
    }
 
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image != null)
            g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
    }
 
    public static VideoPanel show(String title, int width, int height, int open) {
        JFrame frame = new JFrame(title);
        if (open == 0) {
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        } else {
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        }
 
        frame.setSize(width, height);
        frame.setBounds(0, 0, width, height);
        VideoPanel videoPanel = new VideoPanel();
        videoPanel.setSize(width, height);
        frame.setContentPane(videoPanel);
        frame.setVisible(true);
        return videoPanel;
    }
}

TakePhotoProcess.java

package org.mk.opencv.sample;
 
import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
 
public class TakePhotoProcess extends Thread {
    private static Logger logger = Logger.getLogger(TakePhotoProcess.class);
 
    private String imgPath;
    private Mat faceMat;
    private final static Scalar color = new Scalar(0, 0, 255);
 
    public TakePhotoProcess(String imgPath, Mat faceMat) {
        this.imgPath = imgPath;
        this.faceMat = faceMat;
    }
 
    public void run() {
        try {
            long currentTime = System.currentTimeMillis();
            StringBuffer samplePath = new StringBuffer();
            samplePath.append(imgPath).append(currentTime).append(".jpg");
            Imgcodecs.imwrite(samplePath.toString(), faceMat);
            logger.info(">>>>>>write image into->" + samplePath.toString());
 
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
 
}

FaceRecognize.java (core main class)

package org.mk.opencv.sample;
 
import java.awt.EventQueue;
 
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.border.EmptyBorder;
 
import org.apache.log4j.Logger;
import org.mk.opencv.VideoPanel;
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;
 
import javax.swing.border.BevelBorder;
import javax.swing.JLabel;
import javax.swing.SwingConstants;
import java.awt.GridLayout;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
 
import javax.swing.JButton;
 
public class FaceRecognize extends JFrame {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }
    private static Logger logger = Logger.getLogger(FaceRecognize.class);
    private static final String cascadeFileFullPath = "D:\\opencvinstall\\build\\install\\etc\\lbpcascades\\lbpcascade_frontalface.xml";
    private static final String photoPath = "D:\\opencv-demo\\face\\";
    private JPanel contentPane;
    protected static VideoPanel videoCamera = new VideoPanel();
    private static final Size faceSize = new Size(165, 200);
    private static VideoCapture capture = new VideoCapture();
 
    /**
     * Launch the application.
     */
    public static void main(String[] args) {
        FaceRecognize frame = new FaceRecognize();
        frame.setVisible(true);
        frame.invokeCamera(frame, videoCamera);
    }
 
    public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
        new Thread() {
            public void run() {
                CascadeClassifier faceCascade = new CascadeClassifier();
                faceCascade.load(cascadeFileFullPath);
                try {
                    capture.open(0);
                    Scalar color = new Scalar(0, 255, 0);
                    MatOfRect faces = new MatOfRect();
                    // Mat faceFrames = new Mat();
                    if (capture.isOpened()) {
                        logger.info(">>>>>>video camera in working");
                        Mat faceMat = new Mat();
                        while (true) {
                            capture.read(faceMat);
                            if (!faceMat.empty()) {
                                faceCascade.detectMultiScale(faceMat, faces);
                                Rect[] facesArray = faces.toArray();
                                if (facesArray.length >= 1) {
                                    for (int i = 0; i < facesArray.length; i++) {
                                        Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
                                        videoPanel.setImageWithMat(faceMat);
                                        frame.repaint();
                                        // videoPanel.repaint();
                                    }
                                }
                            } else {
                                logger.info(">>>>>>not found anyinput");
                                break;
                            }
                            Thread.sleep(80);
                        }
                    }
                } catch (Exception e) {
                    logger.error("invoke camera error: " + e.getMessage(), e);
                }
            }
        }.start();
    }
 
    /**
     * Create the frame.
     */
 
    public FaceRecognize() {
 
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setBounds(100, 100, 1024, 768);
        contentPane = new JPanel();
        contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
        setContentPane(contentPane);
        contentPane.setLayout(null);
 
        JPanel cameraGroup = new JPanel();
        cameraGroup.setBounds(10, 10, 988, 580);
        contentPane.add(cameraGroup);
        cameraGroup.setLayout(null);
 
        JLabel videoDescriptionLabel = new JLabel("Video");
        videoDescriptionLabel.setHorizontalAlignment(SwingConstants.CENTER);
        videoDescriptionLabel.setBounds(0, 10, 804, 23);
        cameraGroup.add(videoDescriptionLabel);
 
        videoCamera.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
        videoCamera.setBounds(10, 43, 794, 527);
        cameraGroup.add(videoCamera);
 
        // JPanel videoPreview = new JPanel();
        VideoPanel videoPreview = new VideoPanel();
        videoPreview.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
        videoPreview.setBounds(807, 359, 171, 211);
        cameraGroup.add(videoPreview);
 
        JLabel lblNewLabel = new JLabel("Preview");
        lblNewLabel.setHorizontalAlignment(SwingConstants.CENTER);
        lblNewLabel.setBounds(807, 307, 171, 42);
        cameraGroup.add(lblNewLabel);
 
        JPanel buttonGroup = new JPanel();
        buttonGroup.setBounds(65, 610, 710, 35);
        contentPane.add(buttonGroup);
        buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));
 
        JButton photoButton = new JButton("Take Photo");
        photoButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                logger.info(">>>>>>take photo performed");
                StringBuffer photoPathStr = new StringBuffer();
                photoPathStr.append(photoPath);
                try {
                    if (capture.isOpened()) {
                        Mat myFace = new Mat();
                        while (true) {
                            capture.read(myFace);
                            if (!myFace.empty()) {
                                Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
                                TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath.toString(), myFace);
                                takePhoto.start();// write the photo to disk
                                videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview panel
                                videoPreview.repaint();// repaint the preview panel
                                break;
                            }
                        }
                    }
                } catch (Exception ex) {
                    logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
                }
            }
        });
        buttonGroup.add(photoButton);
 
        JButton trainButton = new JButton("Train");
        buttonGroup.add(trainButton);
 
        JButton identifyButton = new JButton("Identify");
        buttonGroup.add(identifyButton);
    }
}

This concludes the article on implementing photo capture with Java + OpenCV and a camera.
