Imagine this: I have a PNG image that shows a teddy bear, or a triangle. Something non-rectangular. Now, I want to find out at which point, for a given row and direction (coordinates relative to the UIImageView's coordinate system), the actually visible image starts.
Example: Let's say I need to know where the teddy bear's feet begin from the left when looking at the last row. That's certainly not just frame.origin.x, because the feet aren't rectangular; they may begin somewhere around x = 12.
I would iterate somehow over the image data and inspect the pixels: "Hey, are you transparent?" If it is, I go on to the next one: "Hey, what's up with you? Transparent?" Until I get the answer: "Nope! I'm totally opaque!" Then I know: "Right! Here's where the feet start in the PNG! That's the boundary!"
Then I do that for every row and get some kind of path coordinates. Personally, I need this to find the perfect point for the rotation axis, as I want to make the image wiggle realistically on a floor. I can't just use the frame's width and origin information for that; it wouldn't look realistic.
So: is there any way at all to introspect the data of a UIImage, in order to check whether a pixel is transparent or not?
3 Answers
#1
You might find it easier to create an in-memory context with a known format and then render the image into that context. Note that the destination context can use a different bitmap format and color mode than the original image, one chosen to make reading easy.
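A minimal Swift sketch of that approach, assuming an 8-bit RGBA destination context with premultiplied alpha and a simple alpha threshold; the function name, parameters, and the 128 cutoff are illustrative choices, not anything from a framework:

```swift
import UIKit

/// Draw the image into a bitmap context whose layout we control, then scan one
/// row from the left and return the x (in image pixels) of the first pixel whose
/// alpha reaches `threshold`, or nil if the whole row is transparent.
func firstOpaqueX(in image: UIImage, row: Int, threshold: UInt8 = 128) -> Int? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    guard (0..<height).contains(row) else { return nil }

    // A context with a known format (8-bit RGBA, premultiplied alpha),
    // independent of however the PNG happens to be stored on disk.
    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0, // let Core Graphics choose the row stride
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return nil }
    let pixels = data.assumingMemoryBound(to: UInt8.self)

    // The first scanline in the buffer is the top row of the image, so `row`
    // counts from the top, matching UIKit's coordinate direction.
    let rowStart = row * context.bytesPerRow
    for x in 0..<width {
        let alpha = pixels[rowStart + x * 4 + 3] // RGBA: alpha is the fourth byte
        if alpha >= threshold { return x }
    }
    return nil
}
```

For the teddy bear's feet you would call this with the last pixel row (the image's pixel height minus one); repeating it per row gives the path coordinates mentioned in the question.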
Alternatively, if you're a glutton for punishment, you can get the CGImage property of the UIImage, then use the CoreGraphics functions to determine the pixel format, then write code to decode the pixel format data...
Last option, if you control the images being used: Create a mask image (1-bit alpha) from the image, then use that for determining where the edges are. You can create such a mask in a graphics editor easily, or you could probably do it by drawing the image into a 1-bit in-memory context, if you set the drawing properties correctly.
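However you produce such a mask, it only needs to be computed once per image. Here is a rough sketch that reuses the same known-format rendering trick as above and reduces the mask straight to what the question needs: the leftmost sufficiently opaque x for every row (names and threshold are again my own):

```swift
import UIKit

/// Build a "left edge" profile: for every pixel row (top to bottom), the x of the
/// leftmost pixel whose alpha reaches `threshold`, or nil for fully transparent rows.
func leftEdgeProfile(of image: UIImage, threshold: UInt8 = 128) -> [Int?]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return nil }
    let pixels = data.assumingMemoryBound(to: UInt8.self)

    var profile = [Int?](repeating: nil, count: height)
    for row in 0..<height {
        let rowStart = row * context.bytesPerRow
        for x in 0..<width where pixels[rowStart + x * 4 + 3] >= threshold {
            profile[row] = x
            break
        }
    }
    return profile
}
```

The same loop, run from the right or by columns, gives the other edges, which is enough to pick a sensible rotation axis.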
#2
I found this tutorial on the net where you can get the UIColor (ARGB) value of each pixel on the screen. Maybe if you tweak it a little you can pass it a UIImage and get all the values that you need.
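Roughly what such a tutorial does, adapted so that it takes a UIImage directly: shift the image so the pixel you care about lands on a 1×1 RGBA context, then read that single pixel back as a UIColor. A hedged sketch; the extension and method names are mine:

```swift
import UIKit

extension UIImage {
    /// The color (including alpha) of the pixel at (x, y), measured in image
    /// pixels from the top-left, or nil if the coordinates are out of bounds.
    func pixelColor(x: Int, y: Int) -> UIColor? {
        guard let cgImage = self.cgImage,
              x >= 0, x < cgImage.width, y >= 0, y < cgImage.height else { return nil }

        var pixel = [UInt8](repeating: 0, count: 4)
        let drawn = pixel.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(
                data: buffer.baseAddress,
                width: 1,
                height: 1,
                bitsPerComponent: 8,
                bytesPerRow: 4,
                space: CGColorSpaceCreateDeviceRGB(),
                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
            ) else { return false }
            // Offset the full image so that pixel (x, y) covers the context's single cell.
            context.draw(cgImage,
                         in: CGRect(x: -CGFloat(x),
                                    y: -CGFloat(cgImage.height - 1 - y),
                                    width: CGFloat(cgImage.width),
                                    height: CGFloat(cgImage.height)))
            return true
        }
        guard drawn else { return nil }

        let alpha = CGFloat(pixel[3]) / 255
        let divisor = alpha > 0 ? alpha : 1 // undo the premultiplication for the RGB channels
        return UIColor(red: CGFloat(pixel[0]) / 255 / divisor,
                       green: CGFloat(pixel[1]) / 255 / divisor,
                       blue: CGFloat(pixel[2]) / 255 / divisor,
                       alpha: alpha)
    }
}
```

For the boundary search you only care about the alpha byte, so for scanning whole rows it is cheaper to render the image once and index the buffer directly (as in the first answer) than to build a UIColor per pixel.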
#3
There doesn't appear to be a reasonably easy way to do this using UIImage alone. Instead, I see two simpler options.
Ideally, if you control the images being used, you can precalculate the data you need. Then you can either reformat images such that the point you're interested in is in the center of the image (which you can calculate easily client-side with only the width and height), or send the co-ordinate with the image. This saves the client having to recalculate for each image, which could potentially speed up your application and save battery life.
Alternatively, image libraries like libpng can be compiled statically into your program. You can load the image, do the processing, then unload it and hand the file off to UIImage. Only the functions you use would be linked in, so you may be able to avoid too much bloat, as the rendering routines might be omitted. The disadvantage is that your software ends up relying on a third-party library.