I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?
(the sphere is guaranteed; I'm interested in sphere-specific optimizations)
Base case:
- Window is approx. 2048 x 1536 (e.g. iPad3)
- Texture map for the globe is 24,000 x 12,000 pixels (an area half the size of the USA fills the full width of the screen)
- The globe is displayed at everything from fully zoomed in (the USA fills the screen) to fully zoomed out (the whole globe visible)
- I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
- Some of the layers are animated (i.e. they have to load and drop their textures rapidly at runtime)
Limitations:
- top-end tablets are limited to 4096x4096 textures
- top-end tablets are limited to 8 simultaneous texture units
Problems:
- In total, that's naively 500 million pixels of texture data
- Splitting into smaller textures doesn't work well, because devices only have 8 units; with a single texture layer I could split across 8 texture units and keep every texture under 4096x4096 - but that only allows one layer
- Rendering the layers as separate geometry works poorly, because they need to be blended using fragment shaders (roughly like the blend sketch after this list)
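For concreteness, this is roughly the kind of single-pass blend I need - a minimal GLSL ES sketch, written as a C string the way it would sit in an ES 2.0 app; the sampler names and blend rules are placeholders, not my actual code:

```c
/* Sketch: blending the three layers (surface, day/night, UI highlight) in one
 * fragment-shader pass.  Sampler/uniform names and blend rules are placeholders. */
static const char *kBlendFragmentShader =
    "precision mediump float;                                         \n"
    "varying vec2 v_uv;                                               \n"
    "uniform sampler2D u_surface;   /* planet surface colour */       \n"
    "uniform sampler2D u_dayNight;  /* day/night mask (animated) */   \n"
    "uniform sampler2D u_overlay;   /* UI highlight layer (RGBA) */   \n"
    "uniform float     u_nightDim;  /* brightness of the night side */\n"
    "void main() {                                                    \n"
    "    vec4  surface = texture2D(u_surface,  v_uv);                 \n"
    "    float dayMask = texture2D(u_dayNight, v_uv).r;               \n"
    "    vec4  overlay = texture2D(u_overlay,  v_uv);                 \n"
    "    vec3  lit = surface.rgb * mix(u_nightDim, 1.0, dayMask);     \n"
    "    gl_FragColor = vec4(mix(lit, overlay.rgb, overlay.a), 1.0);  \n"
    "}                                                                \n";
```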
...at the moment, the only idea I have that sounds viable is:
- split the sphere into NxM "pieces of sphere" and render each one as separate geometry
- use mipmaps to render low-res textures when zoomed out
- ...rely on simple culling to cut out most of them when zoomed in, and on mipmapping to use small(er) textures when they can't be culled (rough sketch below)
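Roughly what I have in mind, as a sketch only - tile counts, tile size and the cull test are placeholders, and obviously not every tile's full-resolution texture could be resident at once:

```c
#include <GLES2/gl2.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Sketch of the NxM "pieces of sphere" idea: each tile gets its own mipmapped
 * texture, and tiles whose centre faces away from the camera are culled. */

#define TILES_X 12      /* 24,000 px wide source, ~2,000 px per tile */
#define TILES_Y 6       /* 12,000 px tall source, ~2,000 px per tile */
#define TILE_PX 2048    /* padded to a power of two, under the 4096 limit */

static GLuint g_tileTex[TILES_Y][TILES_X];

/* Upload one tile's pixels and let GL build the mipmap chain for it. */
static void uploadTile(int tx, int ty, const void *rgbaPixels)
{
    glGenTextures(1, &g_tileTex[ty][tx]);
    glBindTexture(GL_TEXTURE_2D, g_tileTex[ty][tx]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TILE_PX, TILE_PX, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
    glGenerateMipmap(GL_TEXTURE_2D);  /* zoomed-out views sample the small levels */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

/* Very crude cull: skip tiles whose centre normal points away from the eye. */
static int tileIsVisible(int tx, int ty, const float eyeDir[3])
{
    float lon = ((tx + 0.5f) / TILES_X) * 2.0f * (float)M_PI - (float)M_PI;
    float lat = ((ty + 0.5f) / TILES_Y) * (float)M_PI - 0.5f * (float)M_PI;
    float n[3] = { cosf(lat) * cosf(lon), sinf(lat), cosf(lat) * sinf(lon) };
    return n[0]*eyeDir[0] + n[1]*eyeDir[1] + n[2]*eyeDir[2] > 0.0f;
}
```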
...but it seems there ought to be an easier way / better options?
2 Answers
#1
It seems there is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.
So you have to stream texture data. The technique you need is called a clipmap (popularized by id Software with their extended MegaTexture technology).
Please read about it here; there are links to documents describing the technique: http://en.wikipedia.org/wiki/Clipmap
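To make the idea concrete, here is a minimal sketch of a clipmap-style update, assuming a fixed-size resident window per level and a hypothetical fetchRegion() helper that reads pixels from storage. A real implementation would use toroidal addressing and upload only the newly exposed strip:

```c
#include <GLES2/gl2.h>

/* Clipmap sketch: one fixed-size GPU texture per level, each covering a
 * progressively larger area of the 24,000 x 12,000 source image.  Only the
 * window around the view centre is resident; the rest stays on flash. */

#define CLIP_SIZE   1024    /* resident window per level, in texels */
#define CLIP_LEVELS 5       /* 1:1, 1:2, 1:4, 1:8, 1:16 of the source */

typedef struct {
    GLuint tex;             /* CLIP_SIZE x CLIP_SIZE texture, created at startup */
    int    srcX, srcY;      /* top-left of the resident window, in source
                               coordinates at this level's scale */
} ClipLevel;

static ClipLevel g_levels[CLIP_LEVELS];

/* Assumed helper, not part of GL: reads a region of one level from storage. */
extern void fetchRegion(int level, int x, int y, int w, int h, void *outRgba);

/* Re-centre one clip level on (centerX, centerY), given in source coordinates
 * at that level's scale. */
static void recenterLevel(int level, int centerX, int centerY, void *scratch)
{
    ClipLevel *cl = &g_levels[level];
    int newX = centerX - CLIP_SIZE / 2;
    int newY = centerY - CLIP_SIZE / 2;
    if (newX == cl->srcX && newY == cl->srcY)
        return;  /* view didn't move at this level, nothing to upload */

    /* Simplified: re-upload the whole window.  A real clipmap uploads only
     * the newly exposed L-shaped strip (toroidal addressing). */
    fetchRegion(level, newX, newY, CLIP_SIZE, CLIP_SIZE, scratch);
    glBindTexture(GL_TEXTURE_2D, cl->tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, CLIP_SIZE, CLIP_SIZE,
                    GL_RGBA, GL_UNSIGNED_BYTE, scratch);
    cl->srcX = newX;
    cl->srcY = newY;
}
```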
#2
This is not easily done in ES, as there is no virtual texture extension (yet). You basically need to implement virtual texturing yourself (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way it is all done in a fragment shader, and no geometry subdivision is required. See this presentation (and the paper) for details on how this can be implemented.
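To give an idea of the shader side, here is a minimal sketch of the indirection lookup that virtual texturing relies on (names, page-table layout and constants are placeholders; mip selection and filtering across page borders are ignored - the presentation covers the real details):

```c
/* Virtual-texture lookup sketch (GLSL ES 2.0, written as a C string).
 * u_pageTable maps each virtual page to the location of its resident copy in
 * the physical page cache.  Everything here is a placeholder. */
static const char *kVirtualTextureFragment =
    "precision highp float;                                           \n"
    "varying vec2 v_uv;              /* virtual UV over whole globe */\n"
    "uniform sampler2D u_pageTable;  /* one texel per virtual page */ \n"
    "uniform sampler2D u_pageCache;  /* atlas of resident pages */    \n"
    "uniform vec2  u_pageCount;      /* virtual pages in x and y */   \n"
    "uniform float u_cachePages;     /* pages per row of the cache */ \n"
    "void main() {                                                    \n"
    "    /* where does this page live in the cache? ([0,1] origin) */ \n"
    "    vec2 cacheOrigin = texture2D(u_pageTable, v_uv).xy;          \n"
    "    /* position inside the page, rescaled to cache space */      \n"
    "    vec2 inPage = fract(v_uv * u_pageCount) / u_cachePages;      \n"
    "    gl_FragColor = texture2D(u_pageCache, cacheOrigin + inPage); \n"
    "}                                                                \n";
```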
If you do the math, it is simply impossible to stream 1 GB (24,000 x 12,000 pixels x 4 bytes, roughly 1.1 GB per layer) in real time. And it would be wasteful too, as the user will never get to see it all at the same time.