Applications of Image Matching with BRIEF, SIFT, and ORB
In this section we introduce more feature descriptors for images: BRIEF (a short binary feature descriptor) and ORB (an improvement on BRIEF and an efficient alternative to SIFT). As we will soon see, both of these descriptors can be used for image matching and object detection.
1. Matching images with the BRIEF binary descriptor using scikit-image
The BRIEF descriptor has a relatively small number of bits and can be computed from a set of intensity-difference tests. Being a short binary descriptor, it has a low memory footprint, and matching with it using the Hamming distance metric is very efficient. BRIEF does not provide rotation invariance, but the desired scale invariance can be obtained by detecting features at different scales. The following code shows how to compute BRIEF binary descriptors with scikit-image functions; the input images used for matching are the grayscale astronaut image and its affine-transformed and rotated versions.
import matplotlib.pylab as pylab
from skimage import data, transform
from skimage.color import rgb2gray
from skimage.feature import (match_descriptors, corner_peaks,
                             corner_harris, plot_matches, BRIEF)
img1 = rgb2gray(data.astronaut())
affine_trans = transform.AffineTransform(scale=(1.2, 1.2), translation=(0,-100))
img2 = transform.warp(img1, affine_trans)
img3 = transform.rotate(img1, 25)
coords1, coords2, coords3 = corner_harris(img1), corner_harris(img2), corner_harris(img3)
coords1[coords1 > 0.01*coords1.max()] = 1
coords2[coords2 > 0.01*coords2.max()] = 1
coords3[coords3 > 0.01*coords3.max()] = 1
keypoints1 = corner_peaks(coords1, min_distance=5)
keypoints2 = corner_peaks(coords2, min_distance=5)
keypoints3 = corner_peaks(coords3, min_distance=5)
extractor = BRIEF()
extractor.extract(img1, keypoints1)
keypoints1, descriptors1 = keypoints1[extractor.mask], extractor.descriptors
extractor.extract(img2, keypoints2)
keypoints2, descriptors2 = keypoints2[extractor.mask], extractor.descriptors
extractor.extract(img3, keypoints3)
keypoints3, descriptors3 = keypoints3[extractor.mask], extractor.descriptors
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
matches13 = match_descriptors(descriptors1, descriptors3, cross_check=True)
fig, axes = pylab.subplots(nrows=2, ncols=1, figsize=(20, 20))
pylab.gray()
plot_matches(axes[0], img1, img2, keypoints1, keypoints2, matches12)
axes[0].axis('off'), axes[0].set_title("Original Image vs. Transformed Image")
plot_matches(axes[1], img1, img3, keypoints1, keypoints3, matches13)
axes[1].axis('off'), axes[1].set_title("Original Image vs. Rotated Image")
pylab.show()
Running the above code produces the output shown in the figure, where you can see how the BRIEF keypoints are matched between the two images.
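Since BRIEF descriptors are bit strings, the Hamming distance used for matching is just a bit count over an XOR, which is why it is so cheap. A minimal sketch with NumPy, using synthetic descriptors rather than the ones extracted above:

```python
import numpy as np

# Two synthetic 256-bit binary descriptors, stored as boolean arrays --
# the same layout skimage's BRIEF extractor uses in extractor.descriptors.
rng = np.random.default_rng(0)
d1 = rng.integers(0, 2, 256).astype(bool)
d2 = d1.copy()
d2[:16] = ~d2[:16]   # flip the first 16 bits to simulate a near-match

# Hamming distance = number of differing bits: XOR, then count.
hamming = np.count_nonzero(d1 ^ d2)
print(hamming)       # 16
```

A small Hamming distance relative to the descriptor length (here 16 out of 256 bits) indicates a likely match.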
2. Matching with the ORB feature detector and binary descriptor using scikit-image
The ORB feature detection and binary description algorithm combines an oriented FAST detection method with a rotated BRIEF descriptor. Compared with BRIEF, ORB offers greater scale and rotation invariance while still matching with the Hamming distance metric, which keeps it efficient. It is therefore preferable to BRIEF for real-time applications. The code is as follows.
import matplotlib.pylab as pylab
from skimage import transform
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import match_descriptors, ORB, plot_matches
img1 = rgb2gray(imread("C:/Users/zhuyupeng/Desktop/1.jpg"))
img2 = transform.rotate(img1, 180)
affine_trans = transform.AffineTransform(scale=(1.3, 1.1), rotation=0.5, translation=(0, -200))
img3 = transform.warp(img1, affine_trans)
img4 = transform.resize(rgb2gray(imread("C:/Users/zhuyupeng/Desktop/2.jpg")), img1.shape, anti_aliasing=True)
descriptor_extractor = ORB(n_keypoints=200)
descriptor_extractor.detect_and_extract(img1)
keypoints1, descriptors1 = descriptor_extractor.keypoints, descriptor_extractor.descriptors
descriptor_extractor.detect_and_extract(img2)
keypoints2, descriptors2 = descriptor_extractor.keypoints, descriptor_extractor.descriptors
descriptor_extractor.detect_and_extract(img3)
keypoints3, descriptors3 = descriptor_extractor.keypoints, descriptor_extractor.descriptors
descriptor_extractor.detect_and_extract(img4)
keypoints4, descriptors4 = descriptor_extractor.keypoints, descriptor_extractor.descriptors
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
matches13 = match_descriptors(descriptors1, descriptors3, cross_check=True)
matches14 = match_descriptors(descriptors1, descriptors4, cross_check=True)
fig, axes = pylab.subplots(nrows=3, ncols=1, figsize=(20,25))
pylab.gray()
plot_matches(axes[0], img1, img2, keypoints1, keypoints2, matches12)
axes[0].axis('off'), axes[0].set_title("Original Image vs. Transformed Image", size=20)
plot_matches(axes[1], img1, img3, keypoints1, keypoints3, matches13)
axes[1].axis('off'), axes[1].set_title("Original Image vs. Transformed Image", size=20)
plot_matches(axes[2], img1, img4, keypoints1, keypoints4, matches14)
axes[2].axis('off'), axes[2].set_title("Image1 vs. Image2", size=20)
pylab.show()
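The cross_check=True argument passed to match_descriptors above keeps only mutual nearest neighbors: a pair is retained only if each descriptor is the other's best match. Its effect can be sketched in plain NumPy (with synthetic descriptors, not scikit-image's implementation):

```python
import numpy as np

def mutual_nn_matches(d1, d2):
    # Pairwise Hamming distances between two sets of boolean descriptors.
    dist = (d1[:, None, :] ^ d2[None, :, :]).sum(-1)
    nn12 = dist.argmin(1)   # best match in d2 for each row of d1
    nn21 = dist.argmin(0)   # best match in d1 for each row of d2
    # Keep a pair only if the choice is mutual (the cross-check).
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]

rng = np.random.default_rng(1)
a = rng.integers(0, 2, (5, 64)).astype(bool)
b = a[::-1].copy()   # the same descriptors, in reverse order
print(mutual_nn_matches(a, b))   # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
```

The cross-check discards one-sided matches, trading a few true positives for far fewer false ones.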
3. Brute-force matching with ORB features using python-opencv
In this section, we demonstrate how to match the descriptors of two images with OpenCV's brute-force matcher. In this method, each feature descriptor from one image is compared against all feature descriptors in the other image (using some distance metric), and the nearest one is returned. We use the BFMatcher() function with ORB descriptors to match two book images, as shown in the following code:
import cv2
import matplotlib.pylab as pylab

img1 = cv2.imread("C:/Users/zhuyupeng/Desktop/1.jpg", 0)  # query image (grayscale)
img2 = cv2.imread("C:/Users/zhuyupeng/Desktop/2.jpg", 0)  # train image (grayscale)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None, flags=2)
pylab.figure(figsize=(20, 10)), pylab.imshow(img3), pylab.show()
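Each DMatch returned by bf.match stores indices into the two keypoint lists (queryIdx into kp1, trainIdx into kp2) plus the descriptor distance, which is what the sort above orders by. Pulling point coordinates out of the matches, for any downstream use, looks like this; Pt and Match are hypothetical stand-ins for cv2.KeyPoint and cv2.DMatch so the sketch runs without OpenCV:

```python
from collections import namedtuple

Pt = namedtuple("Pt", "pt")                           # stand-in for cv2.KeyPoint
Match = namedtuple("Match", "queryIdx trainIdx distance")  # stand-in for cv2.DMatch

kp1 = [Pt((0.0, 0.0)), Pt((10.0, 5.0))]
kp2 = [Pt((1.0, 1.0)), Pt((11.0, 6.0))]
matches = [Match(1, 1, 12.0), Match(0, 0, 3.0)]

# Sort by distance (best first) -- the same step as in the OpenCV code above.
matches = sorted(matches, key=lambda m: m.distance)
src = [kp1[m.queryIdx].pt for m in matches]   # points in the first image
dst = [kp2[m.trainIdx].pt for m in matches]   # corresponding points in the second
print(src, dst)   # [(0.0, 0.0), (10.0, 5.0)] [(1.0, 1.0), (11.0, 6.0)]
```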
4. Brute-force matching with the Scale-Invariant Feature Transform (SIFT) descriptor and the ratio test using OpenCV
SIFT keypoints between two images are matched by identifying their nearest neighbors. In some cases, however, because of noise and other factors, the second-closest match may be nearly as close as the first. In such cases we compute the ratio of the closest distance to the second-closest distance and reject the match if this ratio exceeds a threshold (0.8 in the SIFT paper; the code below uses the stricter 0.75). According to the SIFT literature, this eliminates about 90% of the false matches while discarding only about 5% of the correct ones. We use the knnMatch() function to obtain the k = 2 best matches for each keypoint and then apply the ratio test, as shown in the following code:
import cv2
import matplotlib.pylab as pylab

img1 = cv2.imread('../images/books.png', 0)
img2 = cv2.imread('../images/book.png', 0)
sift = cv2.xfeatures2d.SIFT_create()  # in OpenCV >= 4.4, use cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good_matches = []
for m1, m2 in matches:
    if m1.distance < 0.75*m2.distance:  # Lowe's ratio test
        good_matches.append([m1])
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None, flags=2)
pylab.imshow(img3), pylab.show()
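The ratio test in numbers: a match survives only when the best distance is clearly smaller than the runner-up. The 0.75 threshold matches the code above (Lowe's paper uses 0.8); the distance pairs below are made up for illustration:

```python
def ratio_test(pairs, ratio=0.75):
    """Keep (best, second_best) distance pairs that pass Lowe's ratio test."""
    return [p for p in pairs if p[0] < ratio * p[1]]

pairs = [(10.0, 50.0),    # unambiguous: 10 < 0.75*50 = 37.5, kept
         (40.0, 45.0),    # ambiguous: 40 >= 0.75*45 = 33.75, rejected
         (5.0, 100.0)]    # unambiguous: 5 < 75, kept
print(ratio_test(pairs))  # [(10.0, 50.0), (5.0, 100.0)]
```

An ambiguous match (two candidates at nearly the same distance) usually means the descriptor landed in a repetitive region, where the nearest neighbor is as likely to be wrong as right.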