
I'm implementing omnidirectional shadow mapping for point lights. I want to store a linear depth in the color textures of a cube map. The program will support two filtering techniques: software PCF (because hardware PCF works only with depth textures) and variance shadow mapping. I found two ways of storing linear depth:
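(The original snippets were lost; reconstructing them from the references to a "first" and "second" format later in this post, they were presumably along these lines. viewSpace is the fragment's position in the light's view space, linearDepthConstant is assumed to be a normalization such as 1.0 / (farPlane - nearPlane), and the variable names are illustrative.)

```glsl
// First format: depth along the cube face's view axis, normalized to [0, 1]
float faceDepth = -viewSpace.z * linearDepthConstant;

// Second format: radial distance from the light to the fragment
float radialDepth = length(viewSpace.xyz) * linearDepthConstant;
```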
What are the differences between them? Are both correct? For standard shadow mapping with software PCF, the shadow test will depend on which linear depth format is used. What about variance shadow mapping? I previously implemented omnidirectional shadow mapping for point lights using non-linear depth and hardware PCF. In that case the shadow test looks like this:
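(That snippet was lost as well; below is a sketch of the usual form of such a test, assuming a samplerCubeShadow whose GL_TEXTURE_COMPARE_MODE is GL_COMPARE_REF_TO_TEXTURE. Names such as shadowMap, lightPos, fragPos, nearPlane and farPlane are illustrative.)

```glsl
uniform samplerCubeShadow shadowMap;
uniform vec3  lightPos;
uniform float nearPlane, farPlane;

float shadowFactor(vec3 fragPos)
{
    vec3 lightToFrag = fragPos - lightPos;

    // Depth along the dominant axis: the -viewSpace.z that the corresponding
    // cube face saw when the shadow map was rendered.
    float localZ = max(abs(lightToFrag.x), max(abs(lightToFrag.y), abs(lightToFrag.z)));

    // Re-apply the projection's non-linear mapping to get a [0, 1] depth
    // comparable with what the rasterizer wrote into the depth cube map.
    float ndcZ = (farPlane + nearPlane) / (farPlane - nearPlane)
               - (2.0 * farPlane * nearPlane) / ((farPlane - nearPlane) * localZ);
    float depth = ndcZ * 0.5 + 0.5;

    // Hardware PCF: the depth comparison happens inside the fetch.
    return texture(shadowMap, vec4(lightToFrag, depth));
}
```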
I also implemented standard shadow mapping without PCF using the second format of linear depth (Edit 1: i.e. the distance to the light plus some offset to fix shadow acne):
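(Again reconstructed rather than original: a minimal sketch of a distance-based test, assuming the shadow pass wrote length(viewSpace.xyz) * linearDepthConstant plus a small offset into the color channel of a regular samplerCube.)

```glsl
uniform samplerCube shadowMap;       // color channel holds distance + offset
uniform vec3  lightPos;
uniform float linearDepthConstant;   // e.g. 1.0 / (farPlane - nearPlane)

float shadowFactor(vec3 fragPos)
{
    vec3 lightToFrag = fragPos - lightPos;
    float stored  = texture(shadowMap, lightToFrag).r;
    float current = length(lightToFrag) * linearDepthConstant;
    return current <= stored ? 1.0 : 0.0;   // lit if the fragment is the closest surface
}
```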
but I have no idea how to do that for the first format of linear depth. Is it possible? Edit 2: For non-linear depth I used glPolygonOffset to fix shadow acne. For linear depth stored as the distance to the light, the offset should instead be added in the shader. I'm now trying to implement standard shadow mapping without PCF using the first linear depth format (-viewSpace.z * linearDepthConstant + offset), but the following shadow test doesn't produce correct results:
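(Presumably the failing test was the same lookup as above, only against a map storing the first format. If so, it compares two different quantities, as sketched here:)

```glsl
vec3 lightToFrag = fragPos - lightPos;
float stored  = texture(shadowMap, lightToFrag).r;            // -viewSpace.z * linearDepthConstant + offset
float current = length(lightToFrag) * linearDepthConstant;    // radial distance: a different quantity
float shadow  = current <= stored ? 1.0 : 0.0;                // the two agree only at the face centers
```

If that is the case, the matching reference value for the first format would be the depth along the dominant axis, max(abs(lightToFrag.x), max(abs(lightToFrag.y), abs(lightToFrag.z))) * linearDepthConstant, since that is the -viewSpace.z the corresponding cube face actually rendered.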
How to fix that?
You could store either one, but you have to be consistent about it. If you store distance instead of depth when you render your shadow map, you have to compare against distance instead of depth when you apply the shadow map. Depth is the natural choice, as that's what the GPU rasterizer generates when you draw the shadow map. It takes extra work to convert it to distance (you have to set up the fragment shader to compute the distance per fragment and write it out yourself).
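(For concreteness, that extra work might look roughly like this in the shadow pass; a sketch with illustrative names, not anyone's actual shader:)

```glsl
#version 330 core

in  vec3 lightSpacePos;              // fragment position relative to the light, interpolated
uniform float linearDepthConstant;   // e.g. 1.0 / (farPlane - nearPlane)
out vec4 outColor;

void main()
{
    // Write the normalized distance into the cube map's color attachment
    // instead of relying on the depth the rasterizer would produce.
    outColor = vec4(length(lightSpacePos) * linearDepthConstant, 0.0, 0.0, 1.0);
}
```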