
[Artificial Intelligence] AI project: AI monitoring of computer-use behavior

Project overview:

An artificial-intelligence course project by the class of 2019 at Zhongshan Institute, University of Electronic Science and Technology of China.

Language: Python

Development tools: PyCharm, Python 3.9

Libraries: PyQt, OpenCV, dlib

Features:

Uses artificial intelligence to monitor the user's posture and distance from the screen while they use the computer.

Source code:

Qt UI:

"""
计算机视觉课程设计
ComputerVisual curriculum design

开发时间:2021年12月8日 下午 19:14
Development time: 2021.12.8 pm 19:14

版权所有?:电子科技大学中山学院-陶凌杰
Property in copyright: University of Electronic Science and Technology of China_ZhongShan college-TonnyJack

仅供参考,禁止商用
For reference only, no commercial use
"""
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'CourseDesign.ui'
#
# Created by: PyQt5 UI code generator 5.15.6
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again.  Do not edit this file unless you know what you are doing.

import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *


import demo


class Ui_Dialog(object):
    def setupUi(self, Dialog):
        Dialog.setObjectName("Dialog")
        Dialog.resize(662, 565)
        self.pushButton = QtWidgets.QPushButton(Dialog)
        self.pushButton.setGeometry(QtCore.QRect(151, 150, 171, 51))
        self.pushButton.setObjectName("pushButton")
        self.pushButton_2 = QtWidgets.QPushButton(Dialog)
        self.pushButton_2.setGeometry(QtCore.QRect(341, 150, 171, 51))
        self.pushButton_2.setObjectName("pushButton_2")
        self.pushButton_3 = QtWidgets.QPushButton(Dialog)
        self.pushButton_3.setGeometry(QtCore.QRect(151, 210, 171, 51))
        self.pushButton_3.setObjectName("pushButton_3")
        self.pushButton_4 = QtWidgets.QPushButton(Dialog)
        self.pushButton_4.setGeometry(QtCore.QRect(341, 210, 171, 51))
        self.pushButton_4.setObjectName("pushButton_4")
        self.pushButton_5 = QtWidgets.QPushButton(Dialog)
        self.pushButton_5.setGeometry(QtCore.QRect(151, 270, 171, 51))
        self.pushButton_5.setObjectName("pushButton_5")
        self.pushButton_6 = QtWidgets.QPushButton(Dialog)
        self.pushButton_6.setGeometry(QtCore.QRect(341, 270, 171, 51))
        self.pushButton_6.setObjectName("pushButton_6")
        self.label = QtWidgets.QLabel(Dialog)
        self.label.setGeometry(QtCore.QRect(230, 40, 241, 61))
        self.label.setLayoutDirection(QtCore.Qt.LeftToRight)
        self.label.setObjectName("label")
        self.textBrowser = QtWidgets.QTextBrowser(Dialog)
        self.textBrowser.setGeometry(QtCore.QRect(90, 350, 501, 191))
        self.textBrowser.setObjectName("textBrowser")

        self.retranslateUi(Dialog)
        self.pushButton.clicked.connect(self.push_buttom1)
        self.pushButton_2.clicked.connect(self.push_buttom2)
        self.pushButton_3.clicked.connect(self.push_buttom3)
        self.pushButton_4.clicked.connect(self.push_buttom4)
        self.pushButton_5.clicked.connect(self.push_buttom5)
        self.pushButton_6.clicked.connect(self.push_buttom6)
        QtCore.QMetaObject.connectSlotsByName(Dialog)
        Dialog.show()

    def retranslateUi(self, Dialog):
        _translate = QtCore.QCoreApplication.translate
        Dialog.setWindowTitle(_translate("Dialog", "AiProgramByZSC_TonnyJack"))
        self.pushButton.setText(_translate("Dialog", "数据采集"))
        self.pushButton_2.setText(_translate("Dialog", "训练模型"))
        self.pushButton_3.setText(_translate("Dialog", "人脸识别"))
        self.pushButton_4.setText(_translate("Dialog", "面部标点"))
        self.pushButton_5.setText(_translate("Dialog", "梯形方向"))
        self.pushButton_6.setText(_translate("Dialog", "三维坐标"))
        self.label.setText(_translate("Dialog",
                                      "<html><head/><body><p><span style=\" "
                                      "font-size:14pt;\">欢迎使用TonnyJack的程序</span></p></body></html>"))
        self.textBrowser.setHtml(_translate("Dialog",
                                            "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" "
                                            "\"http://www.w3.org/TR/REC-html40/strict.dtd\">\n "
                                            "<html><head><meta name=\"qrichtext\" content=\"1\" /><style "
                                            "type=\"text/css\">\n "
                                            "p, li { white-space: pre-wrap; }\n"
                                            "</style></head><body style=\" font-family:\'SimSun\'; font-size:9pt; "
                                            "font-weight:400; font-style:normal;\">\n "
                                            "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; "
                                            "margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" "
                                            "font-size:11pt;\">这是一个由Opencv和Python开发的人工智能程序</span></p>\n "
                                            "<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; "
                                            "margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; "
                                            "font-size:11pt;\"><br /></p>\n "
                                            "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; "
                                            "margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" "
                                            "font-size:11pt;\">部分代码来自:抖音-恩培</span></p>\n "
                                            "<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; "
                                            "margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; "
                                            "font-size:11pt;\"><br /></p>\n "
                                            "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; "
                                            "margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" "
                                            "font-size:11pt;\">代码已开源并发布在个人博客:</span></p>\n "
                                            "<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; "
                                            "margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; "
                                            "font-size:11pt;\"><br /></p>\n "
                                            "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; "
                                            "margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" "
                                            "font-size:11pt;\">本程序仅供学习与参考,禁止商用!</span></p>\n "
                                            "<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; "
                                            "margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; "
                                            "font-size:11pt;\"><br /></p>\n "
                                            "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; "
                                            "margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" "
                                            "font-size:11pt;\">版权所有?:电子科技大学中山学院-陶凌杰</span></p>\n "
                                            "<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; "
                                            "margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; "
                                            "font-size:11pt;\"><br /></p></body></html>"))

    def push_buttom1(self):
        self.m = demo.MonitorBabay()
        self.m.collectFacesFromCamera(1, 3, 3)

    def push_buttom2(self):
        self.m = demo.MonitorBabay()
        self.m.train()

    def push_buttom3(self):
        self.m = demo.MonitorBabay()
        self.m.run(960, 720, 1)

    def push_buttom4(self):
        self.m = demo.MonitorBabay()
        self.m.run(960, 720, 2)

    def push_buttom5(self):
        self.m = demo.MonitorBabay()
        self.m.run(960, 720, 3)

    def push_buttom6(self):
        self.m = demo.MonitorBabay()
        self.m.run(960, 720, 5)



if __name__ == "__main__":
    app = QApplication(sys.argv)
    form = QWidget()
    ui = Ui_Dialog()
    ui.setupUi(form)
    form.show()
    sys.exit(app.exec_())

Main program (demo.py):

"""
主要功能:检测孩子是否在看电视,看了多久,距离多远
使用技术点:人脸检测、人脸识别(采集照片、训练、识别)、姿态估计
"""
import os
from argparse import ArgumentParser

import cv2
import dlib
import time

from pose_estimator import PoseEstimator
from utils import Utils


class MonitorBabay:
    def __init__(self):
        # Face detection
        self.face_detector = dlib.get_frontal_face_detector()
        # Face recognition model (cv2.face requires: pip uninstall opencv-python, pip install opencv-contrib-python)
        self.face_model = cv2.face.LBPHFaceRecognizer_create()

        # 68-point facial landmark predictor
        self.landmark_predictor = dlib.shape_predictor("./assets/shape_predictor_68_face_landmarks.dat")

        # Pixel distance between the outer corner of the left eye and the outer corner of the right eye when standing 1.5 m away (calibrate with getEyePixelDist, then update this value)
        self.eyeBaseDistance = 65

        # pose_estimator.show_3d_model()

        self.utils = Utils()

    # Collect photos for training
    # Parameters:
    # label_index: index of the label
    # save_interval: number of seconds between saved photos
    # save_num: total number of photos to save
    def collectFacesFromCamera(self, label_index, save_interval, save_num):
        cap = cv2.VideoCapture(0)

        width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
        height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

        fpsTime = time.time()
        last_save_time = fpsTime
        saved_num = 0
        while True:
            _, frame = cap.read()
            frame = cv2.flip(frame, 1)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            faces = self.face_detector(gray)

            for face in faces:

                if saved_num < save_num:
                    if (time.time() - last_save_time) > save_interval:
                        self.utils.save_face(face, frame, label_index)
                        saved_num += 1
                        last_save_time = time.time()

                        print('label_index:{index},成功采集第{num}张照片'.format(index=label_index, num=saved_num))
                else:
                    print('照片采集完毕!')
                    exit()

                self.utils.draw_face_box(face, frame, '', '', '')

            cTime = time.time()
            fps_text = 1 / (cTime - fpsTime)
            fpsTime = cTime

            frame = self.utils.cv2AddChineseText(frame, "帧率: " + str(int(fps_text)), (10, 30), textColor=(0, 255, 0),
                                                 textSize=50)
            frame = cv2.resize(frame, (int(width) // 2, int(height) // 2))
            cv2.imshow('Collect data', frame)
            if cv2.waitKey(5) & 0xFF == 27:
                break
        cap.release()

    # Train the face recognition model
    def train(self):
        print('训练开始!')
        label_list, img_list = self.utils.getFacesLabels()
        self.face_model.train(img_list, label_list)
        self.face_model.save("./face_model/model.yml")
        print('训练完毕!')

    # Measure the pixel distance between the two outer eye corners
    def getEyePixelDist(self):

        cap = cv2.VideoCapture(0)

        width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
        height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

        # Pose estimator
        self.pose_estimator = PoseEstimator(img_size=(height, width))

        fpsTime = time.time()

        while True:
            _, frame = cap.read()
            frame = cv2.flip(frame, 1)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            faces = self.face_detector(gray)

            pixel_dist = 0

            for face in faces:
                # Landmarks
                landmarks = self.landmark_predictor(gray, face)
                image_points = self.pose_estimator.get_image_points(landmarks)

                left_x = int(image_points[36][0])
                left_y = int(image_points[36][1])
                right_x = int(image_points[45][0])
                right_y = int(image_points[45][1])

                pixel_dist = abs(right_x - left_x)

                cv2.circle(frame, (left_x, left_y), 8, (255, 0, 255), -1)
                cv2.circle(frame, (right_x, right_y), 8, (255, 0, 255), -1)

                # Face box
                frame = self.utils.draw_face_box(face, frame, '', '', '')

            cTime = time.time()
            fps_text = 1 / (cTime - fpsTime)
            fpsTime = cTime

            frame = self.utils.cv2AddChineseText(frame, "帧率: " + str(int(fps_text)), (20, 30), textColor=(0, 255, 0),
                                                 textSize=50)
            frame = self.utils.cv2AddChineseText(frame, "像素距离: " + str(int(pixel_dist)), (20, 100),
                                                 textColor=(0, 255, 0), textSize=50)

            # frame = cv2.resize(frame, (int(width)//2, int(height)//2) )
            cv2.imshow('Baby watching TV', frame)
            if cv2.waitKey(5) & 0xFF == 27:
                break
        cap.release()

    # Run the main monitoring loop
    def run(self, w, h, display):
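        # w, h: capture frame width and height in pixels
        # display: overlay mode - 1: face box, 2: 68 landmarks, 3: orientation box,
        #          4: direction pointer, 5: 3D axes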
        model_path = "./face_model/model.yml"
        if not os.path.exists(model_path):
            print('人脸识别模型文件不存在,请先采集训练')
            exit()

        label_zh = self.utils.loadLablZh()

        self.face_model.read(model_path)

        cap = cv2.VideoCapture(0)

        width = w
        height = h

        print(width, height)

        # Pose estimator
        self.pose_estimator = PoseEstimator(img_size=(height, width))

        fpsTime = time.time()

        zh_name = ''
        x_label = ''
        z_label = ''
        is_watch = ''
        angles = [0, 0, 0]
        person_distance = 0
        watch_start_time = fpsTime
        watch_duration = 0

        # fps = 12
        # videoWriter = cv2.VideoWriter('./record_video/out' + str(time.time()) + '.mp4',
        #                               cv2.VideoWriter_fourcc(*'H264'), fps, (width, height))

        while True:
            _, frame = cap.read()
            frame = cv2.resize(frame, (width, height))
            frame = cv2.flip(frame, 1)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            faces = self.face_detector(gray)

            for face in faces:

                x1, y1, x2, y2 = self.utils.getFaceXY(face)
                face_img = gray[y1:y2, x1:x2]

                try:
                    # Face recognition
                    idx, confidence = self.face_model.predict(face_img)
                    zh_name = label_zh[str(idx)]
                except cv2.error:
                    print('cv2.error')

                # Landmarks
                landmarks = self.landmark_predictor(gray, face)
                # Compute the rotation vector
                rotation_vector, translation_vector = self.pose_estimator.solve_pose_by_68_points(landmarks)

                # Estimate the distance to the face
                person_distance = round(self.pose_estimator.get_distance(self.eyeBaseDistance), 2)

                # Compute head rotation angles
                rmat, jac = cv2.Rodrigues(rotation_vector)
                angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(rmat)

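                # A +/-15 degree dead zone decides the head direction: angles[1] (shown as the
                # "X axis" value below) maps to left / front / right, and angles[0] (shown as the
                # "Z axis" value) maps to down / centered / up. The user only counts as watching
                # when the head is both facing front ('前') and centered ('中').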
                if angles[1] < -15:
                    x_label = '左'
                elif angles[1] > 15:
                    x_label = '右'
                else:
                    x_label = '前'

                if angles[0] < -15:
                    z_label = "下"
                elif angles[0] > 15:
                    z_label = "上"
                else:
                    z_label = "中"

                is_watch = '是' if (x_label == '前' and z_label == '中') else '否'

                if is_watch == '是':
                    now = time.time()
                    watch_duration += (now - watch_start_time)

                watch_start_time = time.time()

                if display == 1:
                    # Face bounding box
                    frame = self.utils.draw_face_box(face, frame, zh_name, is_watch, person_distance)
                if display == 2:
                    # 68 facial landmarks
                    self.utils.draw_face_points(landmarks, frame)
                if display == 3:
                    # Orientation box (trapezoid showing head direction)
                    self.pose_estimator.draw_annotation_box(
                        frame, rotation_vector, translation_vector, is_watch)
                if display == 4:
                    # Direction pointer
                    self.pose_estimator.draw_pointer(frame, rotation_vector, translation_vector)
                if display == 5:
                    # 3D coordinate axes
                    self.pose_estimator.draw_axes(frame, rotation_vector, translation_vector)

                # Only handle a single face for testing
                break

            cTime = time.time()
            fps_text = 1 / (cTime - fpsTime)
            fpsTime = cTime

            frame = self.utils.cv2AddChineseText(frame, "帧率: " + str(int(fps_text)), (20, 30), textColor=(0, 255, 0),
                                                 textSize=50)

            color = (255, 0, 255) if person_distance <= 1 else (0, 255, 0)

            frame = self.utils.cv2AddChineseText(frame, "距离: " + str(person_distance) + "m", (20, 100), textColor=color,
                                                 textSize=50)

            color = (255, 0, 255) if is_watch == '是' else (0, 255, 0)

            frame = self.utils.cv2AddChineseText(frame, "观看: " + str(is_watch), (20, 170), textColor=color, textSize=50)

            # Accumulated watching time, shown in minutes
            duration_str = str(round((watch_duration / 60), 2)) + "min"

            frame = self.utils.cv2AddChineseText(frame, "时长: " + duration_str, (20, 240), textColor=(0, 255, 0),
                                                 textSize=50)

            color = (255, 0, 255) if x_label == '前' else (0, 255, 0)

            frame = self.utils.cv2AddChineseText(frame, "X轴: {degree}° {x_label} ".format(x_label=str(x_label),
                                                                                          degree=str(int(angles[1]))),
                                                 (20, height - 220), textColor=color, textSize=40)

            color = (255, 0, 255) if z_label == '中' else (0, 255, 0)

            frame = self.utils.cv2AddChineseText(frame, "Z轴: {degree}° {z_label}".format(z_label=str(z_label),
                                                                                         degree=str(int(angles[0]))),
                                                 (20, height - 160), textColor=color, textSize=40)

            frame = self.utils.cv2AddChineseText(frame, "Y轴: {degree}°".format(degree=str(int(angles[2]))),
                                                 (20, height - 100), textColor=(0, 255, 0), textSize=40)

            # videoWriter.write(frame)
            # frame = cv2.resize(frame, (int(width)//2, int(height)//2) )
            cv2.imshow('Baby watching TV', frame)
            if cv2.waitKey(5) & 0xFF == 27:
                break
        cap.release()


if __name__ == "__main__":
    m = MonitorBabay()
    m.run(960, 720, 5)

# Command-line arguments (an alternative, CLI-driven entry point)
# parser = ArgumentParser()
# parser.add_argument("--mode", type=str, default='run',
#                     help="Run mode: collect, train, distance, run (photo collection, training, distance calibration, main program)")
# parser.add_argument("--label_id", type=int, default=1,
#                     help="Label id for the collected photos.")
# parser.add_argument("--img_count", type=int, default=3,
#                     help="Number of photos to collect")
# parser.add_argument("--img_interval", type=int, default=3,
#                     help="Interval in seconds between collected photos")
#
# parser.add_argument("--display", type=int, default=1,
#                     help="Display mode, value 1-5")
#
# parser.add_argument("--w", type=int, default=960,
#                     help="Frame width")
# parser.add_argument("--h", type=int, default=720,
#                     help="Frame height")
# args = parser.parse_args()
#
# mode = args.mode
#
# if mode == 'collect':
#     print("About to collect photos.")
#     if args.label_id and args.img_count and args.img_interval:
#         m.collectFacesFromCamera(args.label_id, args.img_interval, args.img_count)
#
# if mode == 'train':
#     m.train()
#
# if mode == 'distance':
#     m.getEyePixelDist()
#
# if mode == 'run':
#     m.run(args.w, args.h, args.display)

Pose-estimation helper (pose_estimator.py):

"""
封装常用工具,降低Demo复杂度

"""
import cv2
import numpy as np


class PoseEstimator:
    """Estimate head pose according to the facial landmarks"""

    def __init__(self, img_size=(480, 640)):
        self.size = img_size

        self.model_points_68 = self._get_full_model_points()
        self.image_points = None

        # Camera internals
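        # Pinhole-camera approximation: the focal length is taken to be the image width and
        # the principal point is assumed to sit at the image center; no real calibration is done.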
        self.focal_length = self.size[1]
        self.camera_center = (self.size[1] / 2, self.size[0] / 2)
        self.camera_matrix = np.array(
            [[self.focal_length, 0, self.camera_center[0]],
             [0, self.focal_length, self.camera_center[1]],
             [0, 0, 1]], dtype="double")

        # Assuming no lens distortion
        self.dist_coeefs = np.zeros((4, 1))

        # Rotation vector and translation vector
        self.r_vec = np.array([[0.01891013], [0.08560084], [-3.14392813]])
        self.t_vec = np.array(
            [[-14.97821226], [-10.62040383], [-2053.03596872]])
        # self.r_vec = None
        # self.t_vec = None

    def _get_full_model_points(self, filename='assets/model.txt'):
        """Get all 68 3D model points from file"""
        raw_value = []
        with open(filename) as file:
            for line in file:
                raw_value.append(line)
        model_points = np.array(raw_value, dtype=np.float32)
        model_points = np.reshape(model_points, (3, -1)).T

        # Transform the model into a front view.
        model_points[:, 2] *= -1

        return model_points

    def show_3d_model(self):
        import matplotlib.pyplot as plt

        x = self.model_points_68[:, 0]
        y = self.model_points_68[:, 1]
        z = self.model_points_68[:, 2]

        fig = plt.figure()
        ax = plt.axes(projection="3d")

        ax.scatter3D(x, y, z)
        plt.show()

    # Estimate the viewing distance using similar triangles

    def get_distance(self, eyeBaseDistance):
        image_points = self.image_points
        left_x = int(image_points[36][0])
        right_x = int(image_points[45][0])

        pixel_dist = abs(right_x - left_x)
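        # eyeBaseDistance is the eye-corner span in pixels measured at a known 1.5 m distance
        # (see MonitorBabay.eyeBaseDistance); the apparent span shrinks in proportion to the
        # distance, so distance (m) ~= 1.5 * eyeBaseDistance / pixel_dist.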

        return (eyeBaseDistance / pixel_dist) * 1.5

    def get_image_points(self, landmarks):
        landmarks_list = []
        for n in range(0, 68):
            x = landmarks.part(n).x
            y = landmarks.part(n).y
            landmarks_list.append([x, y])

        image_points = np.array(landmarks_list, dtype="double")

        self.image_points = image_points

        return image_points

    def solve_pose_by_68_points(self, landmarks):
        """
        Solve pose from all the 68 image points.
        Return (rotation_vector, translation_vector) as pose.
        """
        image_points = self.get_image_points(landmarks)

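        # The hard-coded defaults above (or the first full solve) provide rotation/translation
        # vectors that are reused as the extrinsic guess on later frames, which typically makes
        # the iterative PnP solve faster and more stable between frames.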
        if self.r_vec is None:
            (_, rotation_vector, translation_vector) = cv2.solvePnP(
                self.model_points_68, image_points, self.camera_matrix, self.dist_coeefs)
            self.r_vec = rotation_vector
            self.t_vec = translation_vector

        (_, rotation_vector, translation_vector) = cv2.solvePnP(
            self.model_points_68,
            image_points,
            self.camera_matrix,
            self.dist_coeefs,
            rvec=self.r_vec,
            tvec=self.t_vec,
            useExtrinsicGuess=True)

        return (rotation_vector, translation_vector)

    def draw_annotation_box(self, image, rotation_vector, translation_vector, is_watch, line_width=2):
        """Draw a 3D box as annotation of pose"""
        point_3d = []
        rear_size = 75
        rear_depth = 0
        point_3d.append((-rear_size, -rear_size, rear_depth))
        point_3d.append((-rear_size, rear_size, rear_depth))
        point_3d.append((rear_size, rear_size, rear_depth))
        point_3d.append((rear_size, -rear_size, rear_depth))
        point_3d.append((-rear_size, -rear_size, rear_depth))

        front_size = 100
        front_depth = 100
        point_3d.append((-front_size, -front_size, front_depth))
        point_3d.append((-front_size, front_size, front_depth))
        point_3d.append((front_size, front_size, front_depth))
        point_3d.append((front_size, -front_size, front_depth))
        point_3d.append((-front_size, -front_size, front_depth))
        point_3d = np.array(point_3d, dtype=np.float64).reshape(-1, 3)

        # Map to 2d image points
        (point_2d, _) = cv2.projectPoints(point_3d,
                                          rotation_vector,
                                          translation_vector,
                                          self.camera_matrix,
                                          self.dist_coeefs)
        point_2d = np.int32(point_2d.reshape(-1, 2))

        color = (255, 0, 255) if is_watch == '是' else (0, 255, 0)

        # Draw all the lines
        cv2.polylines(image, [point_2d], True, color, line_width, cv2.LINE_AA)
        cv2.line(image, tuple(point_2d[1]), tuple(
            point_2d[6]), color, line_width, cv2.LINE_AA)
        cv2.line(image, tuple(point_2d[2]), tuple(
            point_2d[7]), color, line_width, cv2.LINE_AA)
        cv2.line(image, tuple(point_2d[3]), tuple(
            point_2d[8]), color, line_width, cv2.LINE_AA)

    # Draw the 3D coordinate axes
    def draw_axes(self, img, R, t):
        img = cv2.drawFrameAxes(img, self.camera_matrix, self.dist_coeefs, R, t, 200)

    # Draw a pointer showing the facing direction
    def draw_pointer(self, img, R, t):
        point1 = (int(self.image_points[30][0]), int(self.image_points[30][1]))

        nose_end_point2D, jacobian = cv2.projectPoints(np.array([(0.0, 0.0, 300.0)]), R, t, self.camera_matrix,
                                                       self.dist_coeefs)

        point2 = (int(nose_end_point2D[0][0][0]), int(nose_end_point2D[0][0][1]))

        cv2.line(img, point1, point2, (0, 255, 0), 4)

Common utilities (utils.py):

"""
封装常用工具,降低Demo复杂度
"""
# 导入PIL
from PIL import Image, ImageDraw, ImageFont
# 导入OpenCV
import cv2
from matplotlib.pyplot import xlabel
import numpy as np
import time
import os
import glob

from sys import platform as _platform


class Utils:
    def __init__(self):
        pass

    # Draw Chinese text on an image
    def cv2AddChineseText(self, img, text, position, textColor=(0, 255, 0), textSize=30):
        if isinstance(img, np.ndarray):  # If this is an OpenCV (ndarray) image, convert it to PIL
            img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        # Create a drawing object for the given image
        draw = ImageDraw.Draw(img)
        # Font settings
        fontStyle = ImageFont.truetype(
            "./fonts/simsun.ttc", textSize, encoding="utf-8")
        # Draw the text
        draw.text(position, text, textColor, font=fontStyle)
        # Convert back to OpenCV format
        return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)

    def getFaceXY(self, face):
        x1 = face.left()
        y1 = face.top()
        x2 = face.right()
        y2 = face.bottom()

        return x1, y1, x2, y2

    # Draw the face bounding box with a name/distance label
    def draw_face_box(self, face, frame, zh_name, is_watch, distance):

        color = (255, 0, 255) if is_watch == '是' else (0, 255, 0)
        x1, y1, x2, y2 = self.getFaceXY(face)

        frame = cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
        m_str = '' if distance == '' else 'm'
        frame = self.cv2AddChineseText(frame,
                                       "{name} {distance}{m_str}".format(name=zh_name, distance=distance, m_str=m_str),
                                       (x1, y1 - 60), textColor=color, textSize=50)

        return frame

    # Save a cropped face image for training
    def save_face(self, face, frame, face_label_index):

        path = './face_imgs/' + str(face_label_index)
        if not os.path.exists(path):
            os.makedirs(path)

        x1, y1, x2, y2 = self.getFaceXY(face)
        face_img = frame[y1:y2, x1:x2]

        filename = path + '/' + str(face_label_index) + '-' + str(time.time()) + '.jpg'

        cv2.imwrite(filename, face_img)

    # Draw the 68 facial landmark points
    def draw_face_points(self, landmarks, frame):
        for n in range(0, 68):
            x = landmarks.part(n).x
            y = landmarks.part(n).y
            cv2.circle(frame, (x, y), 4, (255, 0, 255), -1)

    # Load face images and their labels for training
    def getFacesLabels(self):
        # Iterate over all saved face images
        label_list = []
        img_list = []
        for file_path in glob.glob('./face_imgs/*/*'):

            dir_str = '/'
            if _platform == "linux" or _platform == "linux2":
                # linux
                pass
            elif _platform == "darwin":
                # MAC OS X
                pass
            elif _platform == "win32":
                # Windows
                dir_str = '\\'
            elif _platform == "win64":
                # Windows 64-bit
                dir_str = '\\'

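            # save_face() names files "<label>-<timestamp>.jpg", so the label index is the
            # part of the basename before the first '-'.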
            label_list.append(int(file_path.split(dir_str)[-1].split('-')[0]))

            img = cv2.imread(file_path)
            # img = cv2.resize(img, (224, 224))
            img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            img_list.append(img)

        return np.array(label_list), img_list

    # Load the label-index -> Chinese-name mapping (label.txt: one "index,name" pair per line)
    def loadLablZh(self):
        with open('./face_model/label.txt', encoding='utf-8') as f:
            back_dict = {}
            for line in f.readlines():
                label_index = line.split(',')[0]
                label_zh = line.split(',')[1].split('\n')[0]
                back_dict[label_index] = label_zh

            return back_dict

How to use:

Step 1: Search for Python, go to the official website, and download the Python 3.9 interpreter.

Step 2: Search for PyCharm, go to the official website, then download and install PyCharm Community.

Step 3: Configure the Python interpreter in PyCharm, open the terminal, and run the following three commands:

pip install OpenCV-python -i https://mirrors.aliyun.com/pypi/simple/

pip install mediapipe -i https://mirrors.aliyun.com/pypi/simple/

pip install PyQt5==5.10.1 -i https://mirrors.aliyun.com/pypi/simple/

(Note: the face recognizer used above, cv2.face, ships with opencv-contrib-python rather than opencv-python, and dlib is also required for face detection and landmarks, as noted in the code comments.)

Step 4: Put the code files above in the same directory and run the Qt UI code.
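For reference, here is a minimal sketch of driving the same workflow from Python without the Qt UI. It assumes the listings above are saved as demo.py, pose_estimator.py and utils.py, that the asset files the code expects (assets/shape_predictor_68_face_landmarks.dat, assets/model.txt, fonts/simsun.ttc, face_model/label.txt) are in place, and uses example values for the label id and photo counts; the file name run_example.py is only illustrative.

# run_example.py - drive the pipeline without the Qt UI; run one step at a time,
# since collectFacesFromCamera() exits the process once enough photos are saved.
import demo

m = demo.MonitorBabay()

# Step 1: collect 3 face crops for label 1, saving one every 3 seconds (Esc to abort)
# m.collectFacesFromCamera(1, 3, 3)

# Step 2: train the LBPH face model; it is written to ./face_model/model.yml
# m.train()

# Step 3 (optional): display the eye-corner pixel distance at 1.5 m, then update
# MonitorBabay.eyeBaseDistance with the value shown on screen
# m.getEyePixelDist()

# Step 4: run the monitor at 960x720 with display mode 1 (face box with name and distance)
m.run(960, 720, 1)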
