@rygorous
Last active December 15, 2015 05:39
Weird rendering problem:
We need to render a 3D object such that the z values getting passed on to depth test/write for all pixels
are all exactly the same value (constant per batch), and we need to be able to choose that value freely.
This is what we'd like to do, but it doesn't work:
// at the end of the VS
out.pos.z = ourZValue * out.pos.w;
Because of round-off error, this is only *approximately* the same value at all vertices, not exactly the
same like we need.
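To make the round-off concrete, here's a small simulation (Python, with float32 rounding emulated via struct; the z and w values are made up) of what happens when ourZValue * w goes back through the hardware perspective divide:

```python
import struct

def f32(x):
    # round a Python double to the nearest IEEE-754 single
    return struct.unpack('f', struct.pack('f', x))[0]

our_z = f32(0.7)  # hypothetical per-batch depth we want

# Simulate out.pos.z = ourZValue * out.pos.w in the VS, followed by
# the hardware perspective divide z/w; both steps round to float32.
mismatches = 0
for i in range(1, 1001):
    w = f32(1.0 + i * 0.37)  # made-up per-vertex w values
    z_after_divide = f32(f32(our_z * w) / w)
    if z_after_divide != our_z:
        mismatches += 1

print(mismatches, "of 1000 vertices land on a slightly different z")
```

Each mismatch is only an ulp or two off, but the depth values need to be bit-identical across the whole batch, so "approximately equal" is exactly the problem.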
Here are the ways we've come up with to solve the problem:
1. Do the perspective divide in the vertex shader
// at the end of the VS
float oneOverW = 1.0f / out.pos.w;
out.pos.xy *= oneOverW;
out.pos.z = ourZValue;
out.pos.w = 1.0f;
With this, we can exactly control the depth value that gets written, but we lose perspective
correction for interpolated quantities. We could multiply all attributes by oneOverW, pass
oneOverW as an extra attribute, and then do the perspective interpolation ourselves in the pixel
shader - but now every pixel shader needs to be specialized for this, and we're doing manual
perspective correction. Ugh.
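For reference, the manual perspective correction this would force on every pixel shader boils down to the following (a minimal Python sketch with made-up vertex data; u stands in for any interpolated attribute):

```python
def lerp(a, b, t):
    return a + (b - a) * t

# hypothetical per-vertex data: attribute u and clip-space w
u0, w0 = 0.0, 1.0
u1, w1 = 1.0, 4.0

t = 0.5  # screen-space interpolation factor at some pixel

# naive (wrong) linear interpolation of u in screen space
u_naive = lerp(u0, u1, t)

# perspective-correct: interpolate u/w and 1/w linearly in screen
# space, then divide in the "pixel shader"
u_over_w = lerp(u0 / w0, u1 / w1, t)
one_over_w = lerp(1.0 / w0, 1.0 / w1, t)
u_correct = u_over_w / one_over_w

print(u_naive, u_correct)  # the two disagree whenever w0 != w1
```

This is exactly the per-attribute divide the hardware normally does for free, now paid for in every pixel shader variant.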
2. Pass ourZValue to the pixel shader (as constant / attribute), write it to oDepth.
This is reasonably straightforward, but it involves writes to oDepth, and again having variants of the
pixel shaders that do this. This is less "ugh" in terms of amount of code but still requires having
basically 2x the pixel shaders and lots of ugly code paths.
3. Massive depth bias abuse.
We set ourZValue = 0 - this always ends up exact. Then, we set the actual Z value we want as a depth
bias. This is nice in that it involves absolutely no modifications to any of the shaders, it's just a
weird projection matrix we send to the VS with a z=0 row. It should also work fine with most rendering
APIs we support.
The problem is that on D3D10+, the depth bias is part of the rasterizer state, and in our case it
changes per batch. So we'd probably end up creating (and destroying) a bunch of rasterizer state objects
per frame. This is fairly iffy.
4. Massive depth range / viewport abuse
Set a depth range that has both the min and max end at ourZValue. Now, no matter what the VS outputs, we
get ourZValue back, or at least should in theory!
But now we're calling glDepthRange (GL) or *SetViewport (D3D) for all affected batches. There's no reason
this cannot be fast - but it's extremely weird so I also wouldn't be surprised if it's a slow path
regardless.
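The arithmetic behind this one is at least easy to check: with an OpenGL-style viewport transform, setting both ends of the depth range to the same value collapses every NDC z to it exactly, because the scale factor becomes zero (a quick sketch with a made-up ourZValue):

```python
our_z = 0.42  # made-up per-batch depth value

def viewport_depth(ndc_z, d_min, d_max):
    # OpenGL-style mapping of NDC z in [-1, 1] to window-space depth
    return d_min + (ndc_z + 1.0) * 0.5 * (d_max - d_min)

# With glDepthRange(our_z, our_z) the (d_max - d_min) factor is exactly
# zero, so whatever the VS and the divide produced, we get our_z back.
for ndc_z in (-1.0, -0.3, 0.0, 0.9999, 1.0):
    assert viewport_depth(ndc_z, our_z, our_z) == our_z
print("ok")
```

So the math is watertight; the open question is purely whether drivers treat a degenerate depth range (and changing it per batch) as a fast path.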
5. ???
If you have other ideas, please ping me: @rygorous on Twitter!
@darksylinc

After realizing my above post is embarrassingly illegible, I'll try to explain the idea in code (assuming IEEE 32-bit):

float z = 0.1f; //Desired value
float w = 11.0f;
float outZ = (z + 83886.0f) * w;
float zBack = (outZ / w) - 83886.0f;

'zBack' prints 0.101562500 - nothing new. My point here is that the chance of 'zBack' printing values other than 0.101562500 should be lower, because outZ was a big number with few mantissa bits left for the fractional part, if my floating-point knowledge doesn't fail me (there's a big possibility it does :P).
Even if we change 'z' to 0.105, zBack still reads the same value. Even if outZ is incremented by 0.025, zBack still reads the same value (in binary).

It is wishful thinking, because it could embarrassingly fail due to how the GPU interpolates depth during rasterization (you know that much better than I do). And even if all of that is true, I'm not sure whether the depth bias is applied before or after checking that depth is within the clipping range.

Should this work, the advantage would be that you can use the same large depth bias for all batches and pass the depth as a constant to the vertex shader. The disadvantage is that you lose control over 'zBack': you can only steer it towards some value, not land on an exact one; it's not 100% reliable, which may not be what you're looking for.
Not to mention you have to account for hardware with 24-bit floating-point precision.
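One property of the trick does hold up numerically: after adding 83886.0, the float32 ulp in that range is 2^-7 = 0.0078125, so whatever zBack comes out as, it is always a multiple of that quantum - the big offset quantizes the result even if it can't pin it down exactly. A quick check (float32 emulated via struct, sweeping made-up w values; this verifies the quantization claim only, not the whole idea):

```python
import struct

def f32(x):
    # round a Python double to the nearest IEEE-754 single
    return struct.unpack('f', struct.pack('f', x))[0]

BIG = 83886.0
ULP = 2.0 ** -7  # float32 spacing in [65536, 131072), where z + BIG lives

results = set()
for z in (0.1, 0.105):
    for i in range(1, 201):
        w = f32(1.0 + i * 0.7)              # made-up per-vertex w values
        out_z = f32(f32(z + BIG) * w)       # what the VS would output
        z_back = f32(f32(out_z / w) - BIG)  # after the divide, offset removed
        results.add(z_back)

# every recovered value snaps to the 2^-7 grid the big offset imposes,
# though the round trip through *w and /w can still move it by a step
assert all((r / ULP).is_integer() for r in results)
print(sorted(results))
```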

@johnbartholomew

Regarding your option 4, maybe you don't have to change the depth range for every batch. Perhaps you can adjust your coordinates and DepthRange so that the round-off error produced by the simple out.pos.z = ourZValue * out.pos.w gets quantised out?

Something like: you divide your depth buffer into ranges (0-255, 256-511, etc), set up the DepthRange to use the first 256 values, render 256 batches, then change DepthRange, render another 256 batches, and so on. You want your increment of ourZValue each time to be (conservatively) larger than round-off error, but then you use DepthRange to pack the final values into the depth buffer without gaps and cut off the noisy low bits.

No doubt the devil is in the details that I haven't worked out.
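At least one of those details checks out numerically: as long as the per-batch spacing of ourZValue exceeds the worst-case round-off, a linear DepthRange-style remap snaps each batch onto its own depth code (a toy Python model; the spacing, noise magnitude, and 256-batch group are all made-up figures):

```python
SPACING = 1.0 / 1024  # per-batch increment of ourZValue (assumed > noise)
NOISE = SPACING / 8   # pretend worst-case round-off from z*w followed by /w

def packed_code(z, base):
    # remap [base, base + 256*SPACING) linearly onto codes 0..255 and
    # round - standing in for the DepthRange + depth buffer quantization
    return round((z - base) / SPACING)

base = 0.25  # start of this group of 256 batches
for batch in range(256):
    intended = base + batch * SPACING
    for noise in (-NOISE, 0.0, NOISE):
        # the noisy low bits get cut off; every batch keeps its own code
        assert packed_code(intended + noise, base) == batch
print("ok")
```

The remaining devil is presumably bounding the real round-off error tightly enough to pick a safe SPACING.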
