How do I pass a numpy array to OpenCV without first saving the file as a png or jpeg?

Date: 2021-11-13 02:06:52

I am trying to take a screenshot, then convert it to a numpy array. I then want to run cv2.matchTemplate using the screenshot. So far the only way I have gotten this to work is to save the image: cv2.imwrite('temp.png',imcv) and then use that image in cv2.matchTemplate. This seems horribly wrong. How can I convert the numpy array properly to avoid saving and just pass it straight to the cv2.matchTemplate function?


I am doing this project in Ubuntu btw.


import pyscreenshot as ImageGrab
import PIL
import cv2
import numpy as np
from matplotlib import pyplot as plt

# part of the screen
im=ImageGrab.grab(bbox=(65,50,835,725)) # X1,Y1,X2,Y2

#convert to numpy array
im=im.convert('RGB')
imcv = np.array(im) 
imcv = imcv[:, :, ::-1].copy() 
cv2.imwrite('temp.png',imcv)

img = cv2.imread('temp.png',0)
template = cv2.imread('fight.png',0)
w, h = template.shape[::-1]

# Apply template Matching
res = cv2.matchTemplate(img,template,cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(min_val)
print(max_val)
print(min_loc)
print(max_loc)
if(max_loc == (484,125)):
    print("True!")

top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)

cv2.rectangle(img,top_left, bottom_right, 255, 2)

plt.subplot(121),plt.imshow(res,cmap = 'gray')
plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(img,cmap = 'gray')
plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
plt.suptitle(cv2.TM_CCOEFF)

plt.show()

This is the simplest I can get it down to; I will also post the error message after the code.


import pyscreenshot as ImageGrab
import PIL
import cv2
import numpy

im=ImageGrab.grab(bbox=(65,50,835,725)) # X1,Y1,X2,Y2
print type(im)
im=im.convert('RGB')
print type(im)
im = numpy.array(im)
print type(im)
im = im[:, :, ::-1].copy() 
print type(im)
cv2.cv.fromarray(im)
print type(im) 

template = cv2.imread('fight.png',0)

res = cv2.matchTemplate(im,template,cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(min_val)
print(max_val)
print(min_loc)
print(max_loc)

<type 'instance'>
<type 'instance'>
<type 'numpy.ndarray'>
<type 'numpy.ndarray'>
<type 'numpy.ndarray'>
OpenCV Error: Assertion failed ((img.depth() == CV_8U || img.depth() == CV_32F) && img.type() == templ.type()) in matchTemplate, file /home/kninja/Downloads/opencv-2.4.9/modules/imgproc/src/templmatch.cpp, line 249
Traceback (most recent call last):
  File "StartOVer.py", line 32, in <module>
    res = cv2.matchTemplate(im,template,cv2.TM_CCOEFF)
cv2.error: /home/kninja/Downloads/opencv-2.4.9/modules/imgproc/src/templmatch.cpp:249: error: (-215) (img.depth() == CV_8U || img.depth() == CV_32F) && img.type() == templ.type() in function matchTemplate

4 solutions

#1


1  

import pyscreenshot as ImageGrab
import PIL
import cv2
import numpy as np

im = ImageGrab.grab(bbox=(65, 50, 835, 725))  # X1, Y1, X2, Y2
print(type(im))
im = im.convert('RGB')
print(type(im))
im = np.array(im)
print(type(im))

cv_img = im.astype(np.uint8)
cv_gray = cv2.cvtColor(cv_img, cv2.COLOR_RGB2GRAY)

template = cv2.imread("filename.png", cv2.IMREAD_GRAYSCALE)

#2


2  

PIL images support the array interface, so you can use fromarray. Try this:


cv2.cv.fromarray(imcv)

#3


1  

I am working on a similar project. I had been using the pyautogui library for automation, but its image-matching functions were slow and limited to exact matches, so I switched to OpenCV for template matching. I found this post while looking for the fastest way to produce a grayscale NumPy-array screenshot. Froyo's answer avoids writing to disk, but I have found the approach below faster anyway. I am also running Ubuntu, and I believe pyautogui takes its screenshots on the backend with the popular Linux tool scrot. The following snippet is adapted from the sample code in OpenCV's documentation: http://docs.opencv.org/3.1.0/d4/dc6/tutorial_py_template_matching.html


#!/usr/bin/env python
import cv2
import numpy as np
import pyautogui
import PIL
from time import time, sleep
import pyscreenshot as ImageGrab

def click_image(template_filename):
    start = time()
    '''
    im=ImageGrab.grab()
    im=im.convert('RGB')
    im = np.array(im)
    cv_img = im.astype(np.uint8)
    screen = cv2.cvtColor(cv_img, cv2.COLOR_RGB2GRAY)
    '''
    pyautogui.screenshot('current_screen.png')
    screen = cv2.imread('current_screen.png',cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_filename,cv2.IMREAD_GRAYSCALE)
    if template is None:
        print("failed to load template.")
        quit()
    w, h = template.shape[::-1]
    method = 'cv2.TM_CCOEFF'
    meth = eval(method)
    # Apply template Matching
    res = cv2.matchTemplate(screen,template,meth)
    #get min/max values to match
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    center = ((top_left[0] + bottom_right[0]) // 2, (top_left[1] + bottom_right[1]) // 2)
    print(center)
    pyautogui.moveTo(center)
    pyautogui.click(center,button="right")
    end = time()
    print("clicked in "+str(int(1000*(end-start)))+"ms")
click_image("files.png")

#4


0  

Use the following code to convert:


import cv2
import numpy as np
from PIL import ImageGrab

img = ImageGrab.grab(bbox=(x1, y1, x2, y2))
img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)  # PIL gives RGB; OpenCV expects BGR
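For a 3-channel image, this cvtColor conversion is equivalent to reversing the last axis, which is what the `[:, :, ::-1]` trick in the question does. A quick check on a toy array that the flip swaps the R and B channels without touching pixel values:

```python
import numpy as np

rgb = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)  # toy RGB image
bgr = rgb[:, :, ::-1].copy()
# channel 0 of bgr equals channel 2 of rgb (B <-> R); green is unchanged
print(np.array_equal(bgr[:, :, 0], rgb[:, :, 2]),
      np.array_equal(bgr[:, :, 1], rgb[:, :, 1]))  # True True
```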
