I have an app that uses Tesseract for OCR. Until now I offered a manual cropping option and passed the cropped image taken from the camera to Tesseract. Now, in iOS 8, there is CIDetector; I use it to detect a rectangle and pass that to Tesseract.
**Problem**
The problem is that when I pass this cropped image to Tesseract, it does not read the image properly.
I believe the reason for the inaccuracy with Tesseract is the resolution/scale of the cropped image.
There are a couple of things I am unclear about:
- The cropped image is a CIImage, which I convert to a UIImage. When I check the size of that image it is very small (320×468), which was not the case in my previous implementation, where the camera image was larger than 3000×2000. Is the image losing its scale during the conversion from CIImage to UIImage?
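One possible cause, as a sketch: initializing a UIImage directly from a CIImage does not always produce a full-resolution bitmap. Rendering through a CIContext preserves the pixel dimensions of the CIImage's extent. The helper name below is my own; the CoreImage calls are the standard API.

```swift
import UIKit
import CoreImage

// Hypothetical helper: renders a CIImage into a full-resolution UIImage.
// Going through CIContext.createCGImage keeps the original pixel size,
// whereas UIImage(ciImage:) may behave differently when drawn.
func renderedUIImage(from ciImage: CIImage) -> UIImage? {
    let context = CIContext(options: nil)
    // Render the whole extent of the CIImage into a CGImage.
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Comparing `renderedUIImage(from:).size` with the size you currently see should tell you whether the conversion step is where the resolution is being lost.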
- Or is the problem that I am obtaining the image differently, rather than taking a picture with the camera?
I have followed this link for live detection: Link
1 Answer
#1
The detector mentioned in the article does not return a rectangle; it returns 4 points, which you need to run through the CIFilter "CIPerspectiveCorrection". The output of that filter can then be used by Tesseract.
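The step described above can be sketched as follows. This assumes you have a `CIRectangleFeature` from the CIDetector; the function name is illustrative, but the filter name and its input keys are the documented CoreImage ones.

```swift
import CoreImage

// Sketch: un-warps the detected quadrilateral with CIPerspectiveCorrection
// so the flattened result can be handed to Tesseract.
func perspectiveCorrected(_ image: CIImage, feature: CIRectangleFeature) -> CIImage? {
    guard let filter = CIFilter(name: "CIPerspectiveCorrection") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    // The four corner points reported by the CIDetector rectangle feature.
    filter.setValue(CIVector(cgPoint: feature.topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: feature.topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: feature.bottomLeft), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: feature.bottomRight), forKey: "inputBottomRight")
    return filter.outputImage
}
```

The output CIImage still needs to be rendered to a bitmap (e.g. via a CIContext) before Tesseract can consume it.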