My C++ code was designed for iOS, and I have now ported it to the NDK with minimal modifications.
I bind a framebuffer and call
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
then I bind the main framebuffer like this:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glGetError() returns GL_INVALID_FRAMEBUFFER_OPERATION.
I can draw into my framebuffer and use its texture to draw into the main framebuffer. But when I call glReadPixels, I get zeros.
This code worked on iOS, and most of it works on Android, except for glReadPixels.
glCheckFramebufferStatus(GL_FRAMEBUFFER)
returns GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT (0x8CD7).
---
I will accept as the answer sample code that gives me pixel data from a framebuffer or texture that I can save to a file.
Right now I can draw to a buffer with an attached texture, and I can use that texture to draw on the main buffer. But I can't get pixels from the framebuffer/texture to save to a file or post to Facebook.
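As a side note on the 0x8CD7 status mentioned above: it means the framebuffer bound at read time has no attachments at all. A small helper of my own (hypothetical, not from the code in question) can decode these statuses when debugging; the numeric values below are the GLES2 enums from <GLES2/gl2.h>, written out so the snippet compiles without GL headers:

```cpp
#include <string>

// Hypothetical debugging helper: maps GLES2 framebuffer status codes
// (the GL_FRAMEBUFFER_* enums) to readable names, so 0x8CD7 is easy to spot.
std::string framebufferStatusName(unsigned int status) {
    switch (status) {
        case 0x8CD5: return "GL_FRAMEBUFFER_COMPLETE";
        case 0x8CD6: return "GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT";
        case 0x8CD7: return "GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT";
        case 0x8CD9: return "GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS";
        case 0x8CDD: return "GL_FRAMEBUFFER_UNSUPPORTED";
        default:     return "unknown framebuffer status";
    }
}
```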
1 Solution
#1
I have been able to do a glReadPixels from a framebuffer by modifying the NDK example "GL2JNIActivity".
I just changed gl_code.cpp, where I added an initialization function:
char *pixel = NULL;
GLuint fbuffer, rbuffers[2];
int width, height;

bool setupMyStuff(int w, int h) {
    // Allocate the memory for the readback buffer (4 components: RGBA)
    if (pixel != NULL) free(pixel);
    pixel = (char *) calloc(w * h * 4, sizeof(char));
    if (pixel == NULL) {
        LOGE("memory not allocated!");
        return false;
    }

    // Create and bind the framebuffer
    glGenFramebuffers(1, &fbuffer);
    checkGlError("glGenFramebuffers");
    glBindFramebuffer(GL_FRAMEBUFFER, fbuffer);
    checkGlError("glBindFramebuffer");

    // Create the color and depth renderbuffers and attach them
    glGenRenderbuffers(2, rbuffers);
    checkGlError("glGenRenderbuffers");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glBindRenderbuffer[color]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, w, h);
    checkGlError("glRenderbufferStorage[color]");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glBindRenderbuffer[depth]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);
    checkGlError("glRenderbufferStorage[depth]");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glFramebufferRenderbuffer[color]");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glFramebufferRenderbuffer[depth]");

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        LOGE("Framebuffer not complete");
        return false;
    }

    // Turn this on to compare the pixels read while the fbo is active
    // with the ones obtained from the screen
    if (false) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        checkGlError("glBindFramebuffer");
    }

    width = w;
    height = h;
    return true;
}
which does nothing more than initialize the framebuffer and allocate space for the output, and which I call at the very end of
bool setupGraphics(int w, int h);
and a block to save the output into my pixel array (which I of course execute after the
checkGlError("glDrawArrays");
statement in renderFrame()):
{
    // Save the output of glReadPixels somewhere... I'm a noob about JNI,
    // however, so I leave this part as an exercise for the reader ;-)
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    checkGlError("glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixel)");
    int end = width * height * 4 - 1;
    // This prints the same results regardless of where the data comes from
    // (screen or fbo)
    LOGI("glReadPixel => (%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),..., (%hhu, %hhu, %hhu, %hhu)",
         pixel[0], pixel[1], pixel[2], pixel[3],
         pixel[4], pixel[5], pixel[6], pixel[7],
         pixel[8], pixel[9], pixel[10], pixel[11],
         pixel[end - 3], pixel[end - 2], pixel[end - 1], pixel[end]);
}
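The "exercise for the reader" above, dumping the pixel array to a file, can be sketched in plain C++ with no JNI at all. The helper name below is mine, not from the NDK sample; it writes a binary PPM because the format needs no image library. Note that glReadPixels returns the bottom row first, so the loop flips the image upright:

```cpp
#include <cstdio>

// Hypothetical helper (not part of the NDK sample): writes the RGBA buffer
// filled by glReadPixels to a binary PPM (P6) file. glReadPixels returns the
// image bottom row first, so rows are written in reverse to flip it upright.
// The alpha channel is dropped because PPM stores only RGB.
bool savePixelsAsPPM(const char *path, const unsigned char *rgba,
                     int w, int h) {
    FILE *f = fopen(path, "wb");
    if (f == NULL) return false;
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    for (int y = h - 1; y >= 0; --y) {           // flip vertically
        const unsigned char *row = rgba + y * w * 4;
        for (int x = 0; x < w; ++x)
            fwrite(row + x * 4, 1, 3, f);        // RGB only, skip A
    }
    fclose(f);
    return true;
}
```

On Android you would call it right after the glReadPixels block, with a writable path, e.g. savePixelsAsPPM("/sdcard/frame.ppm", (unsigned char *) pixel, width, height); the exact path depends on your app's storage permissions.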
The results in logcat are identical, whether you draw on the framebuffer or on the screen:
I/libgl2jni( 5246): glReadPixel => (0,4,0,255),(8,4,8,255),(0,4,0,255),..., (0, 4, 0, 255)
I/libgl2jni( 5246): glReadPixel => (8,4,8,255),(8,8,8,255),(0,4,0,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,8,8,255),(8,12,8,255),(8,8,8,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,12,8,255),(16,12,16,255),(8,12,8,255),..., (8, 12, 8, 255)
I/libgl2jni( 5246): glReadPixel => (16,12,16,255),(16,16,16,255),(8,12,8,255),..., (16, 16, 16, 255)
[...]
Tested on Froyo (phone) and on a 4.0.3 tablet.
You can find all other details in the original NDK example (GL2JNIActivity).
Hope this helps.
EDIT: also make sure to check:
Android NDK glReadPixels() from offscreen buffer
where the poster seemed to have the same symptoms (and solved the problem by calling glReadPixels from the right thread).