I am new to OpenGL, and trying to learn ES 2.0.
To start with, I am working on a card game where I need to render multiple card images. I followed this tutorial: http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/
I have created a few classes to handle the data and actions.
- MySprite holds the texture information, including the location and scale factors.
- Batcher draws all the sprites in one go. It is a rough implementation.
- ShaderHelper manages creation of shaders and linking them to a program.
- GLRenderer is where the rendering is handled (it implements `Renderer`).
Q1
My program renders one image correctly. The problem is that when I render 2 images, the first one is replaced by the second one in its place, so the second image ends up being rendered twice.
I suspect it is something related to how I create textures in the MySprite class, but I am not sure why. Can you help?
Q2
I read that if I have to render 2 images, I need to use GL_TEXTURE0 and GL_TEXTURE1, instead of just using GL_TEXTURE0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
But since these constants are limited (0 to 31), is there a better way to render more than 32 small images without losing the images' uniqueness?
Please point me in the right direction.
The code
GLRenderer:
public class GLRenderer implements Renderer {
ArrayList<MySprite> images = new ArrayList<MySprite>();
Batcher batch;
int x = 0;
...
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
batch = new Batcher();
MySprite s = MySprite.createGLSprite(mContext.getAssets(), "menu/back.png");
images.add(s);
s.XScale = 2;
s.YScale = 3;
images.add(MySprite.createGLSprite(mContext.getAssets(), "menu/play.png"));
// Set the clear color to black
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1);
ShaderHelper.initGlProgram();
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
mScreenWidth = width;
mScreenHeight = height;
// Redo the Viewport, making it fullscreen.
GLES20.glViewport(0, 0, mScreenWidth, mScreenHeight);
batch.setScreenDimension(width, height);
// Set our shader program
GLES20.glUseProgram(ShaderHelper.programTexture);
}
@Override
public void onDrawFrame(GL10 unused) {
// clear Screen and Depth Buffer, we have set the clear color as black.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
batch.begin();
int y = 0;
for (MySprite s : images) {
s.X = x;
s.Y = y;
batch.draw(s);
y += 200;
}
batch.end();
x += 1;
}
}
Batcher:
public class Batcher {
// Store the model matrix. This matrix is used to move models from object space (where each model can be thought
// of being located at the center of the universe) to world space.
private final float[] mtrxModel = new float[16];
// Store the projection matrix. This is used to project the scene onto a 2D viewport.
private static final float[] mtrxProjection = new float[16];
// Allocate storage for the final combined matrix. This will be passed into the shader program.
private final float[] mtrxMVP = new float[16];
// Create our UV coordinates.
static float[] uvArray = new float[]{
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
static FloatBuffer uvBuffer;
static FloatBuffer vertexBuffer;
static boolean staticInitialized = false;
static short[] indices = new short[]{0, 1, 2, 0, 2, 3}; // The order of vertex rendering.
static ShortBuffer indicesBuffer;
ArrayList<MySprite> sprites = new ArrayList<MySprite>();
public Batcher() {
if (!staticInitialized) {
// The texture buffer
uvBuffer = ByteBuffer.allocateDirect(uvArray.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer();
uvBuffer.put(uvArray)
.position(0);
// initialize byte buffer for the draw list
indicesBuffer = ByteBuffer.allocateDirect(indices.length * 2)
.order(ByteOrder.nativeOrder())
.asShortBuffer();
indicesBuffer.put(indices)
.position(0);
float[] vertices = new float[] {
0, 0, 0,
0, 1, 0,
1, 1, 0,
1, 0, 0
};
// The vertex buffer.
vertexBuffer = ByteBuffer.allocateDirect(vertices.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer();
vertexBuffer.put(vertices)
.position(0);
staticInitialized = true;
}
}
public void setScreenDimension(int screenWidth, int screenHeight) {
Matrix.setIdentityM(mtrxProjection, 0);
// (0,0)--->
// |
// v
// I want it to be more natural, like desktop screen coordinates
Matrix.orthoM(mtrxProjection, 0,
-1f, screenWidth,
screenHeight, -1f,
-1f, 1f);
}
public void begin() {
sprites.clear();
}
public void draw(MySprite sprite) {
sprites.add(sprite);
}
public void end() {
// Get handle to shape's transformation matrix
int u_MVPMatrix = GLES20.glGetUniformLocation(ShaderHelper.programTexture, "u_MVPMatrix");
int a_Position = GLES20.glGetAttribLocation(ShaderHelper.programTexture, "a_Position");
int a_texCoord = GLES20.glGetAttribLocation(ShaderHelper.programTexture, "a_texCoord");
int u_texture = GLES20.glGetUniformLocation(ShaderHelper.programTexture, "u_texture");
GLES20.glEnableVertexAttribArray(a_Position);
GLES20.glEnableVertexAttribArray(a_texCoord);
//loop all sprites
for (int i = 0; i < sprites.size(); i++) {
MySprite ms = sprites.get(i);
// Matrix op - start
Matrix.setIdentityM(mtrxMVP, 0);
Matrix.setIdentityM(mtrxModel, 0);
Matrix.translateM(mtrxModel, 0, ms.X, ms.Y, 0f);
Matrix.scaleM(mtrxModel, 0, ms.getWidth() * ms.XScale, ms.getHeight() * ms.YScale, 0f);
Matrix.multiplyMM(mtrxMVP, 0, mtrxModel, 0, mtrxMVP, 0);
Matrix.multiplyMM(mtrxMVP, 0, mtrxProjection, 0, mtrxMVP, 0);
// Matrix op - end
// Pass the data to shaders - start
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
// Prepare the texture coordinates
GLES20.glVertexAttribPointer(a_texCoord, 2, GLES20.GL_FLOAT, false, 0, uvBuffer);
GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, mtrxMVP, 0);
// Set the sampler texture unit to where we have saved the texture.
GLES20.glUniform1i(u_texture, ms.getTextureId());
// Pass the data to shaders - end
// Draw the triangles
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
}
}
}
ShaderHelper:
public class ShaderHelper {
static final String vs_Image =
"uniform mat4 u_MVPMatrix;" +
"attribute vec4 a_Position;" +
"attribute vec2 a_texCoord;" +
"varying vec2 v_texCoord;" +
"void main() {" +
" gl_Position = u_MVPMatrix * a_Position;" +
" v_texCoord = a_texCoord;" +
"}";
static final String fs_Image =
"precision mediump float;" +
"uniform sampler2D u_texture;" +
"varying vec2 v_texCoord;" +
"void main() {" +
" gl_FragColor = texture2D(u_texture, v_texCoord);" +
"}";
// Program variables
public static int programTexture;
public static int vertexShaderImage, fragmentShaderImage;
public static int loadShader(int type, String shaderCode){
// create a vertex shader type (GLES20.GL_VERTEX_SHADER)
// or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
int shader = GLES20.glCreateShader(type);
// add the source code to the shader and compile it
GLES20.glShaderSource(shader, shaderCode);
GLES20.glCompileShader(shader);
// return the shader
return shader;
}
public static void initGlProgram() {
// Create the shaders, images
vertexShaderImage = ShaderHelper.loadShader(GLES20.GL_VERTEX_SHADER, ShaderHelper.vs_Image);
fragmentShaderImage = ShaderHelper.loadShader(GLES20.GL_FRAGMENT_SHADER, ShaderHelper.fs_Image);
ShaderHelper.programTexture = GLES20.glCreateProgram(); // create empty OpenGL ES Program
GLES20.glAttachShader(ShaderHelper.programTexture, vertexShaderImage); // add the vertex shader to program
GLES20.glAttachShader(ShaderHelper.programTexture, fragmentShaderImage); // add the fragment shader to program
GLES20.glLinkProgram(ShaderHelper.programTexture); // creates OpenGL ES program executables
}
public static void dispose() {
GLES20.glDetachShader(ShaderHelper.programTexture, ShaderHelper.vertexShaderImage);
GLES20.glDetachShader(ShaderHelper.programTexture, ShaderHelper.fragmentShaderImage);
GLES20.glDeleteShader(ShaderHelper.fragmentShaderImage);
GLES20.glDeleteShader(ShaderHelper.vertexShaderImage);
GLES20.glDeleteProgram(ShaderHelper.programTexture);
}
}
MySprite:
public class MySprite {
public int X, Y;
public float XScale, YScale;
private int w, h;
int textureId = -1;
private MySprite(Bitmap bmp, int textureId) {
this.w = bmp.getWidth();
this.h = bmp.getHeight();
this.textureId = textureId;
this.XScale = this.YScale = 1f;
}
public static MySprite createGLSprite(final AssetManager assets, final String assetImagePath) {
Bitmap bmp = TextureHelper.getBitmapFromAsset(assets, assetImagePath);
if (bmp == null) return null;
MySprite ms = new MySprite(bmp, createGlTexture());
Log.d("G1", "image id = " + ms.getTextureId());
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
bmp.recycle();
return ms;
}
private static int createGlTexture() {
// Generate Textures, if more needed, alter these numbers.
final int[] textureHandles = new int[1];
GLES20.glGenTextures(1, textureHandles, 0);
if (textureHandles[0] != 0) {
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandles[0]);
return textureHandles[0];
} else {
throw new RuntimeException("Error loading texture.");
}
}
...
}
2 Answers
#1
Your code mixes up two concepts: texture ids (or, as they are called in the official OpenGL documentation, texture names), and texture units:
- A texture id is a unique id for each texture object, where a texture object owns the actual data as well as sampling parameters. You can have a virtually unlimited number of texture objects, with the practical limit typically being the amount of memory on your machine.
- A texture unit is an entry in a table of textures that are currently bound and available to be sampled by a shader. The maximum size of this table is an implementation-dependent limit, which can be queried with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...); see the small query sketch after this list. The guaranteed minimum for compliant ES 2.0 implementations is 8.
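If you want to check that limit on a given device, here is a minimal sketch of the query, using the same GLES20 and Log classes as the question's code (the log tag is arbitrary):
int[] maxTextureUnits = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_IMAGE_UNITS, maxTextureUnits, 0);
Log.d("GL", "Fragment shader texture image units: " + maxTextureUnits[0]);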
You're using texture ids correctly while creating your textures, by generating an id with glGenTextures(), binding it with glBindTexture(), and then setting up the texture.
The problem is where you set up the textures for drawing:
GLES20.glUniform1i(u_texture, ms.getTextureId());
The value of the sampler uniform is not a texture id; it is the index of a texture unit. You then need to bind the texture you want to use to the texture unit you specify.
Using texture unit 0, the correct code looks like this:
GLES20.glUniform1i(u_texture, 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ms.getTextureId());
A few remarks on this code sequence:
- Note that the uniform value is the index of the texture unit (0), while the argument of glActiveTexture() is the corresponding enum (GL_TEXTURE0). That's because... it was defined that way. Unfortunate API design, IMHO, but you just need to be aware of it.
- glBindTexture() binds the texture to the currently active texture unit, so it needs to come after glActiveTexture().
- The glActiveTexture() call is not really needed if you only ever use one texture. GL_TEXTURE0 is the default value. I put it there to illustrate how the connection between texture unit and texture id is established.
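Putting this together with the loop in your Batcher.end(), a minimal sketch of the per-sprite part could look like this (assuming one texture per sprite and texture unit 0 throughout; the matrix and attribute setup stays exactly as you have it):
for (int i = 0; i < sprites.size(); i++) {
    MySprite ms = sprites.get(i);
    // ... matrix setup, glVertexAttribPointer and glUniformMatrix4fv calls as before ...
    // Bind this sprite's texture to texture unit 0 and point the sampler at unit 0.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ms.getTextureId());
    GLES20.glUniform1i(u_texture, 0);
    // Draw the triangles
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
}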
Multiple texture units are used if you want to sample multiple textures in the same shader.
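For illustration, here is a hedged sketch of what two texture units would look like; it assumes a fragment shader with two samplers named u_texture0 and u_texture1 (your current shader has only one), and firstTextureId/secondTextureId are placeholder texture ids:
// Hypothetical two-sampler setup; the uniform names and texture id variables are assumptions.
int u_texture0 = GLES20.glGetUniformLocation(ShaderHelper.programTexture, "u_texture0");
int u_texture1 = GLES20.glGetUniformLocation(ShaderHelper.programTexture, "u_texture1");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, firstTextureId);   // first texture -> unit 0
GLES20.glUniform1i(u_texture0, 0);                            // sampler 0 reads from unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, secondTextureId);  // second texture -> unit 1
GLES20.glUniform1i(u_texture1, 1);                            // sampler 1 reads from unit 1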
#2
To begin I'll point out some general things about OpenGL:
Each texture is a large square image. Loading that image into the GPU's memory takes time, so you cannot keep swapping images in and out of the GPU's texture memory and expect a fast run time.
Q1: The reason only the second image is showing is this line in your sprite class:
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
You call that twice, so texture 0 is replaced by the second image, and only that image gets drawn.
To combat this, developers load a single image that contains many smaller images in it, a.k.a. a texture map (texture atlas). The size of the image that can be loaded largely depends on the GPU; Android devices range roughly from 1024^2 to 4096^2 pixels.
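If you want to see that limit on a particular device, here is a small sketch of the query (GLES20 again; the value is the maximum texture width/height in pixels):
int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
Log.d("GL", "Max texture size: " + maxTextureSize[0] + "x" + maxTextureSize[0]);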
To use a smaller part of the texture for a sprite, you have to manually define the uvArray that is in your batcher class.
Let's imagine our texture has 4 images divided as follows:
(0.0, 0.0) top left      _______     (1.0, 0.0) top right
                        |___|___|
                        |___|___|    the middle of the square is (0.5, 0.5)
(0.0, 1.0) bot left                  (1.0, 1.0) bot right
That means the uv values for the top left image are:
static float[] uvArray = new float[]{
0.0f, 0.0f, //top left
0.0f, 0.5f, //bot left
0.5f, 0.5f, //bot right
0.5f, 0.0f //top right
};
This way you have just quadrupled the number of sprites you can fit on a texture.
Because of this, you will have to pass not only which texture the sprite is on, but also its custom UVs that the batcher should use (a small helper like the sketch below could compute them).
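As a sketch of that idea, here is a hypothetical helper (not part of the original classes) that computes the UV array for one cell of a grid-aligned atlas, assuming the atlas is split into cols x rows equal cells and (col, row) = (0, 0) is the top-left cell:
static float[] uvForCell(int col, int row, int cols, int rows) {
    float u0 = (float) col / cols;        // left edge of the cell
    float v0 = (float) row / rows;        // top edge of the cell
    float u1 = (float) (col + 1) / cols;  // right edge of the cell
    float v1 = (float) (row + 1) / rows;  // bottom edge of the cell
    return new float[] {
        u0, v0,   // top left
        u0, v1,   // bottom left
        u1, v1,   // bottom right
        u1, v0    // top right
    };
}
For the 2x2 example above, uvForCell(0, 0, 2, 2) produces exactly the uvArray shown earlier; the batcher would then upload the result into a per-sprite FloatBuffer instead of the single shared static uvBuffer.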