Elsewhere on * a question was asked regarding a depth buffer histogram: Create depth buffer histogram texture with GLSL.
I am writing an iOS image-processing app and am intrigued by this question but unclear on the answer provided. So, is it possible to create an image histogram using the GPU via GLSL?
2 Answers
#1
0
Yes, it is. It's not the best approach in general, but it's the best one available on iOS, since OpenCL is not supported. You'll lose some elegance, and your code will probably not be as straightforward, but almost all OpenCL features can be achieved with shaders.
If it helps, DirectX11 comes with an FFT example for compute shaders. See the DX11 August SDK Release Notes.
#2
33
Yes, there is, although it's a little more challenging on iOS than you'd think. I have a red histogram generated and plotted entirely on the GPU, running against a live video feed.
Tommy's suggestion in the question you link is a great starting point, as is this paper by Scheuermann and Hensley. What's suggested there is to use scattering to build up a histogram for the color channels in the image. Scattering is a process where you pass a grid of points into your vertex shader and have that shader read the color at each point. The value of the desired color channel at that point is then written out as the X coordinate (with 0 for the Y and Z coordinates). Your fragment shader then draws a translucent, 1-pixel-wide point at that coordinate in your target.
That target is a 1-pixel-tall, 256-pixel-wide image, with each width position representing one color bin. By writing out a point with a low alpha channel (or low RGB values) and then using additive blending, you can accumulate a higher value for each bin based on the number of times that specific color value occurs in the image. These histogram pixels can then be read for later processing.
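If it helps to picture that target, here is a minimal sketch, assuming OpenGL ES 2.0, of creating a 256x1 accumulation target and reading its bins back; the names histogramTexture, histogramFramebuffer, and histogramBins are hypothetical, not taken from any particular implementation:

// Hypothetical 256x1 RGBA render target for histogram accumulation.
GLuint histogramTexture, histogramFramebuffer;
glGenTextures(1, &histogramTexture);
glBindTexture(GL_TEXTURE_2D, histogramTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &histogramFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, histogramFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, histogramTexture, 0);
glViewport(0, 0, 256, 1);

// ... scatter the points into this target (see the code further down) ...

// Afterwards, pull the 256 accumulated bins back for later processing.
GLubyte histogramBins[256 * 4];
glReadPixels(0, 0, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, histogramBins);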
The major problem with doing this in shaders on iOS is that, despite reports to the contrary, Apple clearly states that texture reads in a vertex shader will not work on iOS. I tried this with all of my iOS 5.0 devices, and none of them were able to perform texture reads in a vertex shader (the screen just goes black, with no GL errors being thrown).
To work around this, I found that I could read the raw pixels of my input image (via glReadPixels() or the faster texture caches) and pass those bytes in as vertex data with a GL_UNSIGNED_BYTE type. The following code accomplishes this:
// Pull the raw RGBA bytes of the input image back to the CPU.
glReadPixels(0, 0, inputTextureSize.width, inputTextureSize.height, GL_RGBA, GL_UNSIGNED_BYTE, vertexSamplingCoordinates);

// Bind the 256x1 histogram target and the scattering shader program.
[self setFilterFBO];
[filterProgram use];

glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);

// Additive blending accumulates the contribution of every point that lands in a bin.
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_BLEND);

// Feed the pixel bytes in as vertex attributes; the byte stride skips pixels so
// that only a fraction of the image is sampled.
glVertexAttribPointer(filterPositionAttribute, 4, GL_UNSIGNED_BYTE, 0, (_downsamplingFactor - 1) * 4, vertexSamplingCoordinates);
glDrawArrays(GL_POINTS, 0, inputTextureSize.width * inputTextureSize.height / (CGFloat)_downsamplingFactor);

glDisable(GL_BLEND);
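For completeness, the destination buffer for that glReadPixels() call needs one RGBA byte quadruple per input pixel; a minimal allocation (hypothetical, not the framework's actual code) would be:

// One RGBA byte quadruple per pixel of the input image.
GLubyte *vertexSamplingCoordinates = (GLubyte *)malloc((size_t)inputTextureSize.width * (size_t)inputTextureSize.height * 4);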
In the above code, you'll notice that I employ a stride so that only a fraction of the image pixels are sampled. This is because the lowest opacity or greyscale level you can write out is 1/256, meaning that each bin maxes out once more than 255 of the sampled pixels have that color value. For example, with a 640x480 frame sampled in full, a bin would saturate as soon as 256 of its 307,200 pixels shared a value. Therefore, I had to reduce the number of pixels processed in order to bring the range of the histogram within this limited window. I'm still looking for a way to extend this dynamic range.
The shaders used to do this are as follows, starting with the vertex shader:
attribute vec4 position;

void main()
{
    // position.x carries a color value in the 0-255 range; 0.0078125 is 1/128,
    // so this maps bin 0 to -1.0 and bin 255 to just under +1.0 across the
    // 256-pixel-wide target.
    gl_Position = vec4(-1.0 + (position.x * 0.0078125), 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}
and finishing with the fragment shader:
uniform highp float scalingFactor;

void main()
{
    // Write a small constant value; additive blending sums one such
    // contribution per scattered point into each bin.
    gl_FragColor = vec4(scalingFactor);
}
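The code above doesn't show where scalingFactor comes from; a hedged host-side sketch, assuming a hypothetical GL program handle histogramProgram that is already in use, might look like:

// Each point deposits 1/256 per channel, so a bin saturates after 256 hits.
GLint scalingFactorUniform = glGetUniformLocation(histogramProgram, "scalingFactor");
glUniform1f(scalingFactorUniform, 1.0f / 256.0f);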
A working implementation of this can be found in my open source GPUImage framework. Grab and run the FilterShowcase example to see the histogram analysis and plotting for yourself.
There are some performance issues with this implementation, but it was the only way I could think of to do this on the GPU on iOS. I'm open to other suggestions.