How to use Apple Core Image with OpenGL ES

Posted: 2022-01-01 05:14:34

So I have a texture (in a framebuffer) in my OpenGL ES game, and I have been wanting to blur it. I have been trying many methods to see which one will get me the best FPS, as this needs to happen in real time because the texture is constantly changing.


How exactly could I take an OpenGL ES texture and send it to Core Image, then back to OpenGL ES for display?


This is some code where you basically pass in a UIImage and it returns a blurred one. Where I am stuck is getting that texture into a UIImage, and I am wondering if there is a better way than loading a new UIImage each time.


- (UIImage *)blurredImageWithImage:(UIImage *)sourceImage {

    //  Create our blurred image
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

    //  Set up the Gaussian blur filter
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@15.0f forKey:kCIInputRadiusKey];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];

    /*  CIGaussianBlur extends the image beyond its original borders;
     *  cropping to the input's extent ensures the output matches up
     *  exactly to the bounds of our original image. */
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];

    UIImage *retVal = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);  // createCGImage: returns a +1 reference; release it to avoid a leak
    return retVal;
}

As shown by a benchmark graph (not reproduced here), Core Image could potentially be even better than the most efficient methods.


1 answer

#1



In case you haven't tried and noticed already, the method you quoted will kill performance — to create a UIImage from a GL texture, you need to read a bunch of pixels off the GPU and into CPU memory. CoreImage might then send it to the GPU to perform its effects, but since you're getting a UIImage out of that, you're reading back from the GPU again, before sending your image back into the GPU for use as a texture. That's four data transfers per frame, and each one is expensive.


Instead, keep Core Image on the GPU, and don't use intermediate UIImages.


  1. You'll need to keep a CIContext around for rendering, so create one early on from your GL context with contextWithEAGLContext: and hang onto it.


  2. Use imageWithTexture:size:flipped:colorSpace: to create a CIImage from your GL texture.


  3. When you want to render the result of a filter chain to a GL texture, bind it with glBindFramebuffer (just as you would if you were making your own render-to-texture pass), and draw the filter's output CIImage in your CIContext with drawImage:inRect:fromRect:. (And don't forget to rebind that texture before using it to perform other GL rendering.)

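Put together, the three steps above might look roughly like this. This is a minimal sketch, not a drop-in implementation: `sourceTexture`, `destinationFBO`, and `size` are hypothetical names for your renderer's existing GL objects, and `self.ciContext` is assumed to have been created once at startup with `contextWithEAGLContext:`.

```objc
#import <CoreImage/CoreImage.h>
#import <OpenGLES/ES2/gl.h>

// Assumes step 1 was done once at startup, e.g.:
//   self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext];
- (void)blurTexture:(GLuint)sourceTexture
            intoFBO:(GLuint)destinationFBO
               size:(CGSize)size
{
    // Step 2: wrap the GL texture as a CIImage without copying to the CPU.
    CIImage *input = [CIImage imageWithTexture:sourceTexture
                                          size:size
                                       flipped:NO
                                    colorSpace:nil];

    // Build the filter chain (here, the same Gaussian blur as above).
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@15.0f forKey:kCIInputRadiusKey];

    // Step 3: bind the destination FBO and draw the filter output into it,
    // just like a render-to-texture pass.
    glBindFramebuffer(GL_FRAMEBUFFER, destinationFBO);
    CGRect bounds = CGRectMake(0, 0, size.width, size.height);
    [self.ciContext drawImage:blur.outputImage inRect:bounds fromRect:bounds];

    // Remember to rebind your usual framebuffer before any other GL rendering.
}
```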

This is pretty well covered in a WWDC talk from way back in 2012: Core Image Techniques. The discussion starts around 40:00 with code and demo following through about 46:05. But stick around for the rest of the talk after that and you'll get lots more demos, inspiration, code and architecture tips for fun things you can do with CoreImage in GL to make a game look awesome.


Also, if you render using Metal, there are corresponding Core Image APIs for doing the same workflow there (contextWithMTLDevice: and imageWithMTLTexture:).

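A rough Metal equivalent might look like this; it's a sketch only, where `sourceTexture`, `outputTexture`, and `commandBuffer` stand in for your renderer's existing Metal objects.

```objc
#import <CoreImage/CoreImage.h>
#import <Metal/Metal.h>

// One-time setup: a CIContext backed by your MTLDevice.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
CIContext *ciContext = [CIContext contextWithMTLDevice:device];

// Per frame: wrap an existing MTLTexture as a CIImage...
CIImage *input = [CIImage imageWithMTLTexture:sourceTexture options:nil];

// ...apply the filter chain (convenience blur API, iOS 10+)...
CIImage *blurred = [input imageByApplyingGaussianBlurWithSigma:15.0];

// ...then render the result into another MTLTexture, encoded on the
// same command buffer as the rest of your frame.
CGColorSpaceRef srgb = CGColorSpaceCreateDeviceRGB();
[ciContext render:blurred
     toMTLTexture:outputTexture
    commandBuffer:commandBuffer
           bounds:input.extent
       colorSpace:srgb];
CGColorSpaceRelease(srgb);
```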
