Writing to and reading from the same texture in OpenGL for an iterative DE solver

Date: 2021-01-23 22:45:51

I am trying to write a fluid simulator that requires iteratively solving some differential equations (the Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL, but I ran into a problem. I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader; the shader performs the calculation and returns the state of the system at time t+dt; I render that texture on a quad and then pass the texture back into the shader. However, I found that I cannot read from and write to the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it? I think I saw a few discussions of ways to work around the fact that OpenGL cannot read and write the same texture, but I could not quite understand them or adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);


Here is my rendering routine:


do{


    //count frames
    frame_counter++;


    // Render to our framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
    glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(programID);
    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderTexture);


    glUniform1i(TextureID, 0);

    printf("Inv Width: %f\n", (float)1.0/windowWidth);
    //Pass inverse widths (put outside of the cycle in future)
    glUniform1f(invWidthID, (float)1.0/windowWidth);
    glUniform1f(invHeightID, (float)1.0/windowHeight);

    // 1st attribute buffer: vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
                          0,                  // attribute 0. No particular reason for 0, but must match the layout in the shader.
                          3,                  // size
                          GL_FLOAT,           // type
                          GL_FALSE,           // normalized?
                          0,                  // stride
                          (void*)0            // array buffer offset
                          );

    // Draw the triangles !
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles

    glDisableVertexAttribArray(0);
    // Render to the screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Render on the whole framebuffer, complete from the lower left corner to the upper right
    glViewport(0,0,windowWidth,windowHeight);

    // Clear the screen
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(quad_programID);

    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderedTexture);
    // Set our "renderedTexture" sampler to user Texture Unit 0
    glUniform1i(texID, 0);

    glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );

    // 1st attribute buffer: vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
                          0,                  // attribute 0. No particular reason for 0, but must match the layout in the shader.
                          3,                  // size
                          GL_FLOAT,           // type
                          GL_FALSE,           // normalized?
                          0,                  // stride
                          (void*)0            // array buffer offset
                          );

    // Draw the triangles !
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles

    glDisableVertexAttribArray(0);

    glReadBuffer(GL_BACK);
    glBindTexture(GL_TEXTURE_2D, sourceTexture);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);


    // Swap buffers
    glfwSwapBuffers(window);
    glfwPollEvents();



} while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
         !glfwWindowShouldClose(window)); // exit condition assumed; the original snippet ends at a bare brace

What happens now is that when I render to the framebuffer, the texture I get as an input appears to be empty. But when I render the same texture to the screen, it successfully renders what I expect.


1 Solution

#1



Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't tell whether glCopyTexImage2D will work with a framebuffer object. It works with on-screen rendering, but I am failing to get it to work when rendering to a framebuffer, and I'm not sure it is even possible in the first place. I made a separate question on this: Does glCopyTexImage2D work when rendering offscreen?
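For what it's worth, glCopyTexImage2D does work offscreen: it copies from the read buffer of whichever framebuffer is currently bound for reading, so with an FBO bound you select a color attachment with glReadBuffer instead of GL_BACK. A non-compilable fragment sketching the call order, reusing the question's own variable names:

```
// With the FBO bound, glCopyTexImage2D reads from that FBO's
// read buffer, not from the window's back buffer.
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glReadBuffer(GL_COLOR_ATTACHMENT0);   // select the FBO attachment, not GL_BACK

// ... draw the simulation pass into the FBO ...

glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
```

That said, the copy is an extra full-texture transfer every iteration; the ping-pong scheme (two textures whose read/write roles are swapped each pass) avoids it entirely.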

