Intelligent Face-Swap Software
Basic Information
Software Name
The software is called Picture Faceswap, an application for swapping faces between pictures.
Features
Given an existing face image of person A and a newly supplied face image of person B, pressing the swap button automatically replaces the face in A's image with B's face.
Project URL
Project URL: https://github.com/StuGeek/Digital-Media-Homework/tree/master/HW3
Usage
Dependencies
OpenCV, dlib, PyQt5, numpy, PIL, and other libraries
Steps
Run with: python main.py
- First press the "Choose face picture" and "Choose head picture" buttons to select the face image and the head image;
- Then press the "Swap" button to perform the swap; the resulting image is shown in the "Result" box;
- Press "Save result as" to save the swapped image;
- Press the "clear" button to clear all images;
- Press the "Exit" button to quit the software.
Interface
Results
Test result 1:
Test result 2:
Key Routines and Algorithms
- Extract the facial landmarks from each image;
- Build the face mask;
- Compute the affine transformation matrix;
- Use the affine transform to map the face mask into the head image, yielding a new face mask aligned to the head image's coordinates;
- Use the affine transform to map the face image into the head image;
- Correct the discontinuity along the edge of the pasted face region caused by differences in skin tone and lighting between the two images;
- Combine the two images' face masks;
- Output the swapped image.
Steps 1 and 2 form the face-detection stage; steps 3 through 8 form the face-transformation stage.
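The final compositing (steps 7 and 8) reduces to an alpha blend between the head image and the warped, color-corrected face, weighted by the combined mask. A minimal numpy sketch, using tiny illustrative 2×2 single-channel "images" rather than real photos:

```python
import numpy as np

# Illustrative 2x2 single-channel "images"; real images are HxWx3 arrays.
head = np.array([[10.0, 10.0], [10.0, 10.0]])
face = np.array([[200.0, 200.0], [200.0, 200.0]])

# Two soft masks in [0, 1]; an element-wise maximum keeps every pixel
# that is covered by either mask.
mask_head = np.array([[1.0, 0.5], [0.0, 0.0]])
mask_face = np.array([[0.0, 0.0], [0.5, 1.0]])
combined = np.maximum(mask_head, mask_face)

# Alpha blend: masked pixels come from the face, the rest from the head.
result = head * (1.0 - combined) + face * combined
print(result)  # [[200. 105.] [105. 200.]]
```

Where the mask is 1 the face wins outright, where it is 0 the head is untouched, and the feathered in-between values produce a smooth transition.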
1. Extract the facial landmarks from an image:
import dlib
import numpy

def acquire_landmarks(image):
    # Detect faces; the detector returns one bounding rectangle per face.
    detector = dlib.get_frontal_face_detector()
    faces = detector(image, 1)
    # ZeroFaces and MoreThanOneFaces are custom exceptions defined in the project.
    if len(faces) == 0:
        raise ZeroFaces
    if len(faces) > 1:
        raise MoreThanOneFaces
    # Run the pre-trained shape predictor on the single detected face.
    predictor = dlib.shape_predictor("./predictor.dat")
    shape = predictor(image, faces[0])
    # Pack the 68 (x, y) landmark coordinates into a 68x2 matrix.
    matrix = numpy.matrix([[point.x, point.y] for point in shape.parts()])
    return matrix
First, download a pre-trained model from the dlib project to build the landmark predictor. dlib's frontal face detector returns a list of bounding rectangles, one per detected face; the rectangle is passed to the predictor, which returns a landmark matrix in which each row holds one landmark's (x, y) coordinates in the image.
2. Build the face mask:
def acquire_shade(image, landmarks):
    # Start from a black single-channel canvas the size of the input image.
    shape = image.shape[:2]
    image = numpy.zeros(shape, numpy.float64)
    # Landmark index ranges for the two mask regions.
    brow_and_eye_points = list(range(17, 48))
    nose_and_mouth_points = list(range(27, 35)) + list(range(48, 61))
    # Fill the convex hull of each region with 1.
    brow_and_eye_landmarks = cv2.convexHull(landmarks[brow_and_eye_points])
    nose_and_mouth_landmarks = cv2.convexHull(landmarks[nose_and_mouth_points])
    cv2.fillConvexPoly(image, brow_and_eye_landmarks, 1)
    cv2.fillConvexPoly(image, nose_and_mouth_landmarks, 1)
    # Stack to three channels, then feather: dilate the mask by blurring
    # and thresholding, and blur once more for a soft edge.
    image = numpy.array([image, image, image]).transpose((1, 2, 0))
    plume_amount = 11
    image = (cv2.GaussianBlur(image, (plume_amount, plume_amount), 0) > 0) * 1.0
    return cv2.GaussianBlur(image, (plume_amount, plume_amount), 0)
When detecting facial landmarks with dlib, you get a mapping of 68 points; grouped together, these points identify the different parts of the face:
Specifically:
- points 42~47 are the left eye;
- points 36~41 are the right eye;
- points 22~26 are the left eyebrow;
- points 17~21 are the right eyebrow;
- points 27~34 are the nose;
- points 48~60 are the mouth.
The mask is therefore built from two regions: one formed by the eyebrows and eyes (points 17~47), and one formed by the nose and mouth (points 27~34 and 48~60). Together the two regions cover points 17~60.
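Selecting those regions is plain numpy fancy indexing on the 68×2 landmark matrix. A small sketch with a dummy matrix (the real values come from the dlib predictor):

```python
import numpy as np

# Dummy 68x2 landmark matrix standing in for real detector output.
landmarks = np.arange(68 * 2).reshape(68, 2)

# Index lists matching the two mask regions described above.
brow_and_eye_points = list(range(17, 48))                      # brows + eyes
nose_and_mouth_points = list(range(27, 35)) + list(range(48, 61))  # nose + mouth

brow_and_eye = landmarks[brow_and_eye_points]
nose_and_mouth = landmarks[nose_and_mouth_points]
print(brow_and_eye.shape, nose_and_mouth.shape)  # (31, 2) (21, 2)
```

In the project these subsets are passed to cv2.convexHull and cv2.fillConvexPoly to rasterize the mask.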
3. Compute the affine transformation matrix:
def acquire_aff_tra_matrix(head_landmarks, face_landmarks):
    # Use only the mask landmarks (points 17~60) to estimate the transform.
    head_landmarks = head_landmarks[list(range(17, 61))]
    face_landmarks = face_landmarks[list(range(17, 61))]
    # Normalize each point set: subtract the centroid, divide by the
    # standard deviation.
    head_landmarks = head_landmarks.astype(numpy.float64)
    col_aver1 = numpy.mean(head_landmarks, 0)
    head_landmarks = head_landmarks - col_aver1
    SD1 = numpy.std(head_landmarks)
    head_landmarks = head_landmarks / SD1
    face_landmarks = face_landmarks.astype(numpy.float64)
    col_aver2 = numpy.mean(face_landmarks, 0)
    face_landmarks = face_landmarks - col_aver2
    SD2 = numpy.std(face_landmarks)
    face_landmarks = face_landmarks / SD2
    col_aver1 = col_aver1.T
    col_aver2 = col_aver2.T
    SD_div = SD2 / SD1 * 1.0
    # The optimal rotation comes from the SVD of the correlation matrix.
    head_landmarks_tra = head_landmarks.T
    u, s, vh = numpy.linalg.svd(head_landmarks_tra * face_landmarks)
    R = u * vh
    R = R.T
    # Assemble the 3x3 homogeneous similarity matrix [sR | T; 0 0 1].
    return numpy.vstack([numpy.hstack((SD_div * R, col_aver2 - SD_div * R * col_aver1)),
                         numpy.matrix([0., 0., 1.])])
Given the two landmark matrices for the face image and the head image, we need a correspondence that maps the face landmarks onto vectors as close as possible to the head landmarks, so that the face transfer is accurate. Stated mathematically: find a scalar s, a 2-D translation vector T, and an orthogonal matrix R minimizing

$$\sum_{i=0}^{n} \left\| sR\mathbf{p_i} + T - \mathbf{q_i} \right\|^2 \qquad (n = 67)$$

where $\mathbf{p_i}$ and $\mathbf{q_i}$ are the i-th rows of the two landmark matrices.
This is a standard problem solved by ordinary Procrustes analysis: subtract the centroids, scale by the standard deviations, then compute the rotation with a singular value decomposition, which yields the affine transformation matrix [sR | T].
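The Procrustes recipe can be checked in pure numpy: construct a point set q as a known similarity transform of p, run the centroid/std/SVD steps, and verify the transform is recovered. All values below are synthetic, chosen only to exercise the math:

```python
import numpy as np

# Synthetic landmark sets: q is p under a known similarity transform
# (scale s, rotation R, translation T).
rng = np.random.default_rng(0)
p = rng.standard_normal((44, 2))            # 44 points, like indices 17..60
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
s = 1.5
T = np.array([2.0, -1.0])
q = s * p @ R.T + T

# Ordinary Procrustes: center both sets, scale by the standard deviation,
# then take the rotation from the SVD of the correlation matrix.
p0, q0 = p - p.mean(0), q - q.mean(0)
sp, sq = p0.std(), q0.std()
u, _, vh = np.linalg.svd((p0 / sp).T @ (q0 / sq))
R_est = (u @ vh).T
s_est = sq / sp

# Applying the recovered transform to p reproduces q.
q_est = s_est * p0 @ R_est.T + q.mean(0)
print(np.allclose(q_est, q))  # True
```

This mirrors the structure of acquire_aff_tra_matrix above, with numpy arrays in place of numpy.matrix.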
4, 5. Map the face image with the affine transform:
def warpAffine_face(face_image, aff_tra_matrix, head_image):
    # Output canvas with the head image's size and the face image's dtype.
    warpAffine_image = numpy.zeros(head_image.shape, face_image.dtype)
    width = head_image.shape[1]
    height = head_image.shape[0]
    # warpAffine takes the top two rows of the 3x3 homogeneous matrix;
    # WARP_INVERSE_MAP samples the face image through the inverse mapping.
    cv2.warpAffine(face_image, aff_tra_matrix[:2],
                   (width, height), warpAffine_image,
                   borderMode=cv2.BORDER_TRANSPARENT,
                   flags=cv2.WARP_INVERSE_MAP)
    return warpAffine_image
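The matrix returned by acquire_aff_tra_matrix is a 3×3 homogeneous similarity matrix, of which cv2.warpAffine consumes only the top two rows. A sketch of what that 2×3 slice does to a single point, using a hypothetical matrix (scale 2, identity rotation, translation (5, 7)) rather than one estimated from real landmarks:

```python
import numpy as np

# Hypothetical 3x3 homogeneous similarity matrix [sR | T; 0 0 1].
M = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, 7.0],
              [0.0, 0.0, 1.0]])

# warpAffine uses only the top two rows (aff_tra_matrix[:2] above).
A = M[:2]

# Mapping a point (x, y): append a homogeneous 1 and multiply.
point = np.array([3.0, 4.0, 1.0])
mapped = A @ point
print(mapped)  # [11. 15.]
```

With WARP_INVERSE_MAP, OpenCV applies this mapping from output pixel coordinates back into the source image, so no explicit matrix inversion is needed.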
6. Correct the image edges:
def revise_edge(head_image, face_image, head_landmarks):
    # Kernel size is proportional to the distance between the eye centers.
    left_eye_points_col_aver = numpy.mean(head_landmarks[list(range(42, 48))], 0)
    right_eye_points_col_aver = numpy.mean(head_landmarks[list(range(36, 42))], 0)
    eye_points_col_aver_sub = left_eye_points_col_aver - right_eye_points_col_aver
    blur_fra = 0.6
    kernel_size = int(blur_fra * numpy.linalg.norm(eye_points_col_aver_sub)) + 1
    # Gaussian kernels must have odd size.
    if kernel_size % 2 == 0:
        kernel_size = kernel_size - 1
    head_image_blur = cv2.GaussianBlur(head_image, (kernel_size, kernel_size), 0)
    face_image_blur = cv2.GaussianBlur(face_image, (kernel_size, kernel_size), 0)
    # Avoid division by near-zero blur values.
    face_image_blur = face_image_blur + 128 * (face_image_blur <= 1.0)
    face_image = face_image.astype(numpy.float64)
    head_image_blur = head_image_blur.astype(numpy.float64)
    face_image_blur = face_image_blur.astype(numpy.float64)
    # Per-pixel RGB scaling: face * blur(head) / blur(face).
    return face_image * head_image_blur / face_image_blur
The edge problem is addressed by shifting the face image's colors toward the head image via per-channel RGB scaling. A blur-based correction is used: the Gaussian kernel size is a blur fraction multiplied by the distance between the eyes; both images are Gaussian-blurred with that kernel, and the corrected image is the face image multiplied by the head image's blur and divided by the face image's blur.
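The blur-ratio idea can be demonstrated without OpenCV. The sketch below substitutes a simple numpy box blur for cv2.GaussianBlur (an assumption made only so the example is self-contained) and uses a toy "face" that is uniformly twice as bright as the "head"; the correction should pull it down to the head's brightness:

```python
import numpy as np

def box_blur(img, k):
    # Simple separable box blur, a stand-in for cv2.GaussianBlur so this
    # sketch needs only numpy.
    out = img.astype(np.float64)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

# Toy single-channel images: the "face" is twice as bright as the "head".
head = np.full((8, 8), 50.0)
face = np.full((8, 8), 100.0)

# Per-pixel scaling: face * blur(head) / blur(face).
corrected = face * box_blur(head, 3) / box_blur(face, 3)
print(corrected[4, 4])  # 50.0 — matches the head's brightness
```

Because blurring is linear, the ratio blur(head)/blur(face) is 0.5 everywhere here, so the corrected face matches the head's local brightness; on real images the ratio varies smoothly, adapting the face's color to the head's local skin tone and lighting.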
7, 8. The face-swap procedure bound to the "Swap" button:
def swapFace(self):
    _translate = QtCore.QCoreApplication.translate
    if self.FACE_PICTURE_PATH == "" or self.HEAD_PICTURE_PATH == "":
        self.resultLabel.setStyleSheet("border-image:url(./icons/initial2.png);")
        self.resultLabel.setText(_translate("MainWindow", "<html><head/><body><p align=\"center\"><span style=\" font-size:20pt; font-weight:600;\">Please choose<br>Face first!</span></p></body></html>"))
        return
    try:
        self.resultLabel.setText(_translate("MainWindow", "<html><head/><body><p align=\"center\"><span style=\" font-size:20pt; font-weight:600;\"></span></p></body></html>"))
        head_image = cv2.imread(self.HEAD_PICTURE_PATH, cv2.IMREAD_COLOR)
        face_image = cv2.imread(self.FACE_PICTURE_PATH, cv2.IMREAD_COLOR)
        # Detect the landmarks before building the masks and the transform.
        head_landmarks = acquire_landmarks(head_image)
        face_landmarks = acquire_landmarks(face_image)
        headshade = acquire_shade(head_image, head_landmarks)
        faceshade = acquire_shade(face_image, face_landmarks)
        aff_tra_matrix = acquire_aff_tra_matrix(head_landmarks, face_landmarks)
        warpAffine_shade = warpAffine_face(faceshade, aff_tra_matrix, head_image)
        warpAffine_image = warpAffine_face(face_image, aff_tra_matrix, head_image)
        revise_edge_image = revise_edge(head_image, warpAffine_image, head_landmarks)
        # Keep any pixel covered by either mask, then alpha-blend.
        combined_shade = numpy.max([headshade, warpAffine_shade], 0)
        result_image = head_image * (1.0 - combined_shade) + revise_edge_image * combined_shade
        cv2.imwrite("./tmp/tmp_result.png", result_image)
        self.resultLabel.setStyleSheet("border-image:url(./tmp/tmp_result.png);")
        self.RESULT_PICTURE_PATH = "./tmp/tmp_result.png"
    except MoreThanOneFaces:
        self.resultLabel.setStyleSheet("border-image:url(./icons/initial2.png);")
        self.resultLabel.setText(_translate("MainWindow", "<html><head/><body><p align=\"center\"><span style=\" font-size:24pt; font-weight:600;\">More than<br>One faces!</span></p></body></html>"))
        self.RESULT_PICTURE_PATH = ""
    except ZeroFaces:
        self.resultLabel.setStyleSheet("border-image:url(./icons/initial2.png);")
        self.resultLabel.setText(_translate("MainWindow", "<html><head/><body><p align=\"center\"><span style=\" font-size:30pt; font-weight:600;\">Zero<br>Face!</span></p></body></html>"))
        self.RESULT_PICTURE_PATH = ""