I am just doing an example of feature detection in OpenCV. This example is shown below. It is giving me the following error:
'module' object has no attribute 'drawMatches'
I have checked the OpenCV Docs and am not sure why I'm getting this error. Does anyone know why?
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
Error:
Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'
3 Answers
#1
The drawMatches function is not part of the Python interface. As you can see in the docs, it is only defined for C++ at the moment.
Excerpt from the docs:
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
If the function had a Python interface, you would find something like this:
Python: cv2.drawMatches(img1, keypoints1, [...])
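A quick way to check whether your own build exposes the binding (a minimal sketch; cv2.__version__ and hasattr are standard):

import cv2
# Print the installed version and check whether this build exposes drawMatches
print(cv2.__version__)
print(hasattr(cv2, 'drawMatches'))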
EDIT
There actually was a commit that introduced this function 5 months ago, but it is not (yet) in the official documentation. Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function's interface for OpenCV 3.0.0 looks like this:
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
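For reference, a minimal sketch of a call against that interface (assuming an OpenCV 3.x build, where the ORB constructor is the factory cv2.ORB_create; file names are the placeholders from the question):

import cv2

img1 = cv2.imread('box.png', 0)           # queryImage
img2 = cv2.imread('box_in_scene.png', 0)  # trainImage

orb = cv2.ORB_create()                    # cv2.ORB() on 2.4.x builds
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# outImg is passed as None so OpenCV allocates it; flags=2 skips unmatched keypoints
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
cv2.imwrite('matches.png', out)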
#2
I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X, and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I've produced.
I've provided my own images, where one is of a cameraman and the other is the same image rotated by 55 degrees counterclockwise.
The basics of what I wrote: I allocate an output RGB image whose number of rows is the maximum of the two images' rows, to accommodate placing both images in the output, and whose number of columns is simply the sum of the two images' columns. Be advised that I assume both images are grayscale.
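If your inputs happen to be colour (BGR), a standard conversion before calling the function keeps that assumption intact (cv2.cvtColor is a stock OpenCV call):

# drawMatches below assumes grayscale - convert colour inputs first
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)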
I place each image in its corresponding spot, then run through a loop of all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.
Bear in mind that the detected keypoints in the second image are with respect to its own coordinate system. If you want to place them in the final output image, you need to offset the column coordinate by the number of columns in the first image, so that the column coordinate is with respect to the coordinate system of the output image. For example, if the first image is 300 columns wide, a keypoint at column 10 in the second image is drawn at column 310 in the output.
Without further ado:
import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structures (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]), cols1+cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4, colour blue, thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1, colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
To illustrate that this works, here are the two images that I used:
I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity as this is a binary descriptor. As such:
import numpy as np
import cv2
img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
This is the image I get:
To use with knnMatch from cv2.BFMatcher
I'd like to note that the above code only works if the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, for example, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of the k matches from des2 which are the closest to each descriptor in des1. Therefore, the first element from the output of knnMatch is a list of the k matches from des2 which were closest to the first descriptor in des1, the second element is a list of the k matches from des2 which were closest to the second descriptor in des1, and so on.
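To make that structure concrete, here is a minimal sketch (assuming des1 and des2 were computed as above; note that crossCheck must stay off for k=2):

bf = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = bf.knnMatch(des1, des2, k=2)
# pairs[0] holds the two closest matches in des2 for the first descriptor in des1,
# already sorted by distance
best, second_best = pairs[0]
print(best.distance <= second_best.distance)  # True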
To make the most sense of knnMatch, you must limit the total number of neighbours to match to k=2. The reason is that you want to use at least two matched points to verify the quality of each match, and if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that the distance from the descriptor in des1 to its best matched point in des2 is well below the distance to its second-best matched point in des2; for example, with a ratio of 0.75, a best distance of 30 against a second-best distance of 50 passes (30 < 37.5), while 45 against 50 does not. Therefore, to turn what is returned from knnMatch into what is required by the code I wrote above, iterate through the matches, apply the ratio test and check whether it passes. If it does, add the first matched keypoint to a new list.
Assuming that you created all of the variables like you did before declaring the BFMatcher instance, you'd now do this to adapt the knnMatch method for use with drawMatches:
# Create matcher - crossCheck must stay off (the default) for knnMatch with k=2
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        # Add first matched keypoint to list
        # if ratio test passes
        good.append(m)

# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance < 0.75*n.distance]
# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)
I want to attribute the above modifications to user @ryanmeasel; the answer in which these modifications were found is in his post: OpenCV Python : No drawMatchesknn function.
#3
I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj.
import cv2
from find_obj import filter_matches,explore_match
img1 = cv2.imread('../c/box.png',0) # queryImage
img2 = cv2.imread('../c/box_in_scene.png',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)  # crossCheck must be off for knnMatch
matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)
explore_match('find_obj', img1, img2, kp_pairs)  # cv2 shows the image
cv2.waitKey()
cv2.destroyAllWindows()
This is the output image:
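Note that the import above only works if find_obj.py from the OpenCV samples is on your Python path. If it isn't, a minimal stand-in for filter_matches in the same spirit (a sketch under that assumption, not the exact sample code) could be:

import numpy as np

def filter_matches(kp1, kp2, matches, ratio=0.75):
    # Keep only knnMatch pairs that pass Lowe's ratio test
    mkp1, mkp2 = [], []
    for m in matches:
        if len(m) == 2 and m[0].distance < m[1].distance * ratio:
            mkp1.append(kp1[m[0].queryIdx])
            mkp2.append(kp2[m[0].trainIdx])
    p1 = np.float32([kp.pt for kp in mkp1])
    p2 = np.float32([kp.pt for kp in mkp2])
    kp_pairs = list(zip(mkp1, mkp2))
    return p1, p2, kp_pairs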