"Any thoughts on pulling a depth texture directly from a forward camera after opaque passes rather than using the pre pass?" (https://twitter.com/bgolus/status/747967191278989312)
Yeah, so there are several things that might complicate this:
- Screenspace ("deferred") shadows. These need the depth texture, but you want to receive shadows while rendering opaque objects, so a depth pre-pass is the only option there. What we could do: remove (or make optional) the screenspace shadows, and just directly sample & blend shadow cascades inside the shader (see the sketch after this list). That increases register pressure, but has quite a few benefits too (MSAA "just works", can do receiver plane bias, works on transparencies etc.)
- Various "not quite standard" cases, like split-screen cameras or other cameras that don't render to the whole render target. Probably not a common case though; if you have shadows & need depth, you likely also have postprocessing etc., so you're not rendering directly into some backbuffer anyway.
- The dreaded coordinate space differences: just flat-out making this change can potentially break existing content (depth range and UV-origin conventions differ between graphics APIs, and existing shaders already assume the current behavior). Making it optional might work though.
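To make the "sample the cascades directly" idea concrete, here's a minimal sketch of the per-pixel logic, written as plain C++ for readability (a real version would be shader code; every name here, ShadowConstants, SampleShadowPCF and so on, is made up for illustration):

```cpp
struct Float3 { float x, y, z; };
struct Float4x4 { float m[4][4]; };

constexpr int kCascadeCount = 4;

// Per-frame shadow data a shader would receive via constant buffers.
struct ShadowConstants {
    Float4x4 worldToShadow[kCascadeCount];  // world -> shadowmap UV + depth
    float    splitDistances[kCascadeCount]; // view-space split far planes
};

// Stand-in for a hardware PCF shadowmap compare (0 = shadowed, 1 = lit).
float SampleShadowPCF(int /*cascade*/, float /*u*/, float /*v*/, float /*z*/)
{
    return 1.0f; // stub so the sketch compiles
}

float DirectionalShadowAttenuation(const ShadowConstants& sc,
                                   const Float3& worldPos, float viewDepth)
{
    // Pick the first cascade whose split distance covers this pixel.
    int c = kCascadeCount - 1;
    for (int i = 0; i < kCascadeCount; ++i)
        if (viewDepth < sc.splitDistances[i]) { c = i; break; }

    // Transform the world position into that cascade's shadowmap space.
    const auto& m = sc.worldToShadow[c].m;
    float u = m[0][0]*worldPos.x + m[0][1]*worldPos.y + m[0][2]*worldPos.z + m[0][3];
    float v = m[1][0]*worldPos.x + m[1][1]*worldPos.y + m[1][2]*worldPos.z + m[1][3];
    float z = m[2][0]*worldPos.x + m[2][1]*worldPos.y + m[2][2]*worldPos.z + m[2][3];

    // A real version would also blend between adjacent cascades near the
    // split boundaries (the "blend" part) to hide the transition.
    return SampleShadowPCF(c, u, v, z);
}
```

This is the extra per-pixel work that raises register pressure; in exchange, the shadow term is computed right where the surface is shaded, which is why MSAA and transparencies stop being special cases.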
Another quite stupid thing we do today: that separate depth pass is not actually a z-prepass, i.e. we sometimes do not use the depth buffer out of it for later rendering! This is primarily due to non-fullscreen cameras, coordinate differences and dynamic batching (with dynamic batching, batching has to happen "consistently" in all rendering passes, otherwise vertex positions, and thus Z values, won't match exactly). So yeah, that one is stupid.
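For contrast, here's what an actual z-prepass looks like, sketched against a completely made-up graphics API. The point is that the depth buffer filled by the first pass is the very one the shading pass tests against, which is also exactly why batching has to be consistent between the two passes:

```cpp
#include <cstdio>

struct Scene {};
struct RenderTarget {};

// Hypothetical immediate-mode graphics context; every call is a stub
// that just logs, so the sketch compiles and runs.
struct Context {
    void SetRenderTarget(RenderTarget&) { std::puts("set render target"); }
    void SetColorWrites(bool on)        { std::printf("color writes: %d\n", on); }
    void SetDepthTest(const char* f)    { std::printf("depth test: %s\n", f); }
    void SetDepthWrites(bool on)        { std::printf("depth writes: %d\n", on); }
    void DrawOpaque(const Scene&)       { std::puts("draw opaque set"); }
};

void RenderWithZPrepass(Context& ctx, Scene& scene, RenderTarget& rt)
{
    ctx.SetRenderTarget(rt);

    // Pass 1: depth only. This doubles as the depth texture source;
    // no separate replacement-shader pass needed.
    ctx.SetColorWrites(false);
    ctx.SetDepthWrites(true);
    ctx.SetDepthTest("LEqual");
    ctx.DrawOpaque(scene);

    // Pass 2: shading, reusing the depth buffer from pass 1.
    // Equal-testing gives zero opaque overdraw, but only works if
    // dynamic batching produces bit-identical vertex positions in both
    // passes; otherwise Z won't match and pixels get dropped.
    ctx.SetColorWrites(true);
    ctx.SetDepthWrites(false);
    ctx.SetDepthTest("Equal");
    ctx.DrawOpaque(scene);
}
```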
What's the plan then?
Currently the plan is doing a "Scriptable Render Loop". This is like command buffers, extended to work on whole sets of objects. We want to ship a few ("deferred", "tiled forward", "super simple, one light low-end") with full source. But you could build your own too. Want to do an actual depth prepass and use that as a texture? Go for it. Custom G-buffer? Same. etc etc
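That API doesn't exist yet, so purely to give a feel for the shape of the idea, here's a sketch in C++-flavored pseudocode; every name below is invented:

```cpp
#include <cstdio>

struct Camera {};
struct CullResults {};

// Hypothetical render loop interface: command-buffer-style calls that
// operate on whole sets of objects. Stubs just log what they'd record.
struct RenderLoop {
    CullResults Cull(const Camera&)        { std::puts("cull"); return {}; }
    void SetRenderTarget(const char* name) { std::printf("target: %s\n", name); }
    void DrawRenderers(const CullResults&, const char* pass)
                                           { std::printf("draw set, pass: %s\n", pass); }
    void SetGlobalTexture(const char* n, const char* src)
                                           { std::printf("bind %s = %s\n", n, src); }
};

// A user-written forward loop that does a real depth prepass and then
// exposes the result as a texture for later passes.
void MyForwardLoop(RenderLoop& rl, const Camera& cam)
{
    CullResults visible = rl.Cull(cam);

    rl.SetRenderTarget("CameraDepth");
    rl.DrawRenderers(visible, "DepthOnly");
    rl.SetGlobalTexture("_CameraDepthTexture", "CameraDepth");

    rl.SetRenderTarget("CameraColor");
    rl.DrawRenderers(visible, "ForwardOpaque");
    rl.DrawRenderers(visible, "ForwardTransparent");
}
```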
But first, vacation :)