> You'll really have to explain what you are up to, because for months we've been seeing experiments of yours
> doing ??? arithmetic, but we still don't know what any of these things are, man :) Would you write
> somewhere what all these graphs are about? Please....? ^__^
Oh this is just to confuse y'all, there's no point to any of this.
Just kidding.
The point of my XYZ-arithmetic prototypes is to explore & explain various techniques for speeding up root finding
over simple sphere tracing, that is, finding the surface a ray hits much faster than stepping along an SDF does,
while keeping the composable nature of arithmetic operations. Classical raytracing does not support that:
it either expects you to analytically intersect with a set of predefined primitives, or it operates on
triangles or quads directly, expecting you to feed it buffered data.
The approaches I tried were all based on the insight that finding surfaces is essentially a root finding problem.
The first technique I tried was interval arithmetic (IA), where you provide a ray interval t0-t1 instead of a
single scalar t, and likewise receive a lower and an upper bound on the computed distance value. If the bounds
contain zero, the section is repeatedly bisected, and intervals entirely outside or inside the surface are thrown
away until the desired precision has been achieved. IA unfortunately gains only one bit of precision per
bisection and produces false positives, which requires a stack, but it often converges faster than SDF stepping,
particularly when it comes to grazing surfaces.
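A minimal sketch of that bisection loop, assuming a hypothetical eval_interval(a, b) that returns conservative lower/upper bounds of the scene function over the ray segment [a, b]:

```python
def trace_ia(eval_interval, t0, t1, eps=1e-4):
    # depth-first bisection: the nearest candidate interval is searched first
    stack = [(t0, t1)]
    while stack:
        a, b = stack.pop()
        lo, hi = eval_interval(a, b)
        if lo > 0.0 or hi < 0.0:
            continue              # entirely outside or inside: discard
        if b - a < eps:
            return a              # narrow enough: treat as a hit
        m = 0.5 * (a + b)
        stack.append((m, b))      # push the far half first...
        stack.append((a, m))      # ...so the near half is popped next
    return None                   # ray misses
```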
The second technique I tried was revised affine arithmetic (AA), where the ray interval is understood as
a line going through the points t0-t1 with a gradient of 1, and is transformed into a parallelogram that
is effectively a 1D oriented bounding box around the possible distance values. If that bounding box
crosses the line f(x) = 0, the interval of the intersection is bisected and searched again until
the error bounds are small enough. AA converges faster than IA, as the truncations allow better-educated
guesses about where the intersection point lies, and AA tracks linear operations
with zero error, which greatly improves the grazing-surfaces issue. It still requires a stack, though.
Unfortunately the pointy edges of the interval parallelograms produce many false positives, so the
iteration count is not as low as it could be. One can always increase the number of coefficients, but
then the computations grow costlier faster than the improved range culling pays off.
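A sketch of a revised affine form with a single noise symbol e1 in [-1, 1] for the ray parameter, plus an accumulated error radius (my own minimal formulation): x = x0 + x1*e1 + err*[-1, 1]. Linear operations are exact; multiplication truncates the nonlinear terms into err:

```python
class Affine:
    def __init__(self, x0, x1=0.0, err=0.0):
        self.x0, self.x1, self.err = x0, x1, abs(err)

    def __add__(self, o):
        return Affine(self.x0 + o.x0, self.x1 + o.x1, self.err + o.err)

    def __sub__(self, o):
        return Affine(self.x0 - o.x0, self.x1 - o.x1, self.err + o.err)

    def __mul__(self, o):
        # conservative product: all nonlinear contributions go into err
        rx = abs(self.x1) + self.err
        ry = abs(o.x1) + o.err
        return Affine(self.x0 * o.x0,
                      self.x0 * o.x1 + o.x0 * self.x1,
                      abs(self.x0) * o.err + abs(o.x0) * self.err + rx * ry)

    def bounds(self):
        r = abs(self.x1) + self.err
        return self.x0 - r, self.x0 + r

def ray_interval(t0, t1):
    # the ray parameter as an affine form: e1 = -1 maps to t0, +1 to t1
    return Affine(0.5 * (t0 + t1), 0.5 * (t1 - t0))
```

Evaluating the distance function on ray_interval(t0, t1) yields the parallelogram: bounds() gives its vertical extent, and the x1 coefficient carries the slope used to guess where the box crosses f(x) = 0.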
So at this point I realized, they're all shit. Nothing is fast enough. Everything is terrible.
So I returned to SDF tracing and looked at the output. The only real problem is the grazing. So I thought,
might there be a way to detect grazing and make larger steps in those areas, reliably, without missing
a potential root?
Then I discovered two things:
1. If one tracked derivatives (wee, a use case for automatic differentiation!), one could easily use
-f(t)/f'(t) to predict the position of the next root for purely convex functions (so spheres, boxes,
any object with purely positive curvature will work; see the dual-number sketch after this list).
As soon as the distance went negative, one could quit, as there would be no more roots to discover.
[It turns out this technique is a combination of the Newton-Raphson method with gradient descent,
and it's a classic technique for root finding.] Unfortunately this method does not work for inverted
spheres and other concave functions. It basically comes down to not being able to get out of the
valley of a quadratic function between its two roots.
2. If one tracked the introduction of discontinuities in f(x), one could give an upper bound on
how far one is allowed to skip ahead over a concave function. The closest discontinuity greater than t
can be tracked through special implementations of abs(), min(), max() and square(); a sketch of this
follows after the next paragraph. This basically allows one to escape a local root region and skip
forward to the next one, or in other words: it implicitly defines a ray function as a series of patches,
each of which may or may not contain one root. Unfortunately, in order to do that, particularly with
square(), one needs to know the next root *perfectly*. Because Newton-Raphson (and other methods) can only
approximate, a call to abs(square(x)) would not be able to reliably find the discontinuity.
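A minimal sketch of the first idea, using forward-mode AD via dual numbers (names and structure are my own; the scene function is assumed to be built only from Dual-aware operations):

```python
class Dual:
    # dual number: value f and derivative df with respect to t
    def __init__(self, f, df=0.0):
        self.f, self.df = f, df

    def __add__(self, o):
        return Dual(self.f + o.f, self.df + o.df)

    def __sub__(self, o):
        return Dual(self.f - o.f, self.df - o.df)

    def __mul__(self, o):
        return Dual(self.f * o.f, self.f * o.df + self.df * o.f)

def newton_trace(scene, t, eps=1e-6, max_iter=64):
    # only reliable for purely convex scene functions
    for _ in range(max_iter):
        v = scene(Dual(t, 1.0))   # seed dt/dt = 1
        if abs(v.f) < eps:
            return t              # converged on a root
        if v.f < 0.0:
            return None           # went negative: no more roots to discover
        if v.df >= 0.0:
            return None           # surface receding: no root ahead
        t -= v.f / v.df           # the -f(t)/f'(t) prediction step
    return None
```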
So, at this point, I already had a system that could fast-trace piecewise linear functions
such as planes and boxes merged with hard CSG ops, needing at maximum only as many iterations
as there were discontinuities (I call them horizons) in the function to find the surface.
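Here is my reconstruction of that piecewise-linear system as a sketch (all names are made up): each value carries f(t), f'(t) and the nearest horizon greater than t, which abs() and min() update whenever their kink or branch switch lies ahead:

```python
import math

class Lin:
    def __init__(self, f, df, horizon=math.inf):
        self.f, self.df, self.horizon = f, df, horizon

def lin_abs(x, t):
    # |f| kinks where the current linear piece crosses zero ahead of t
    h = x.horizon
    if x.df != 0.0:
        r = t - x.f / x.df
        if r > t:
            h = min(h, r)
    return Lin(-x.f, -x.df, h) if x.f < 0.0 else Lin(x.f, x.df, h)

def lin_min(x, y, t):
    # min() switches branches where f - g crosses zero ahead of t
    h = min(x.horizon, y.horizon)
    ddf = x.df - y.df
    if ddf != 0.0:
        r = t - (x.f - y.f) / ddf
        if r > t:
            h = min(h, r)
    a = x if x.f <= y.f else y
    return Lin(a.f, a.df, h)

def trace_linear(scene, t, t_max):
    # each patch is linear: either its root lies before the horizon
    # (a hit), or we skip straight to the horizon
    while t < t_max:
        v = scene(t)              # Lin valid on [t, v.horizon)
        if v.df != 0.0:
            r = t - v.f / v.df
            if t <= r < v.horizon:
                return r
        t = v.horizon
    return None
```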
But planar only is boring, so I looked into what was necessary to get spherical surfaces.
For quadratic operations, any sort of approximator was out of the question. So I considered
a closed-form solution: if one carefully restricted the set of valid arithmetic operations,
one could give strong guarantees about root finding. E.g. if a function is a
combination of calls to a+b, a*const, abs(a), min(a,b), max(a,b) and square(a) only, and square
isn't called recursively, so that f'''(x) = 0 is guaranteed, then, knowing the input value t
and tracking the derivatives f'(x) and f''(x), one can perfectly predict the position of
the closest root greater than t.
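With f'''(x) = 0 the second-order Taylor expansion at t is exact, so that prediction is just the quadratic formula applied to the tracked derivatives (a sketch in my own formulation):

```python
import math

def next_root(t, f, df, ddf):
    # exact closest root greater than t of
    #   f(t + s) = f + df*s + 0.5*ddf*s^2, valid because f''' = 0
    if ddf == 0.0:
        if df == 0.0:
            return None                     # constant piece, no crossing
        s = -f / df
        return t + s if s > 0.0 else None
    disc = df * df - 2.0 * ddf * f
    if disc < 0.0:
        return None                         # no real root on this piece
    r = math.sqrt(disc)
    steps = [s for s in ((-df - r) / ddf, (-df + r) / ddf) if s > 0.0]
    return t + min(steps) if steps else None
```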
Then I realized that at this point any function would be either piecewise linear or piecewise
quadratic, and so it would be more useful to store the coefficients of a0 + a1*t + a2*t² = 0
rather than computing derivatives, as the polynomial form ensures that invalid operations
are not possible while still providing a cheap way to compute the derivatives. This is what
polynomial arithmetic is about. All arithmetic operations now transform the coefficients, and
the closed-form quadratic solve is then used to find the roots, all without a stack.
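A sketch of what this could look like (my reconstruction; all names are made up). Each value carries its coefficient triple plus the nearest horizon greater than the current t:

```python
import math

def quadratic_roots(a0, a1, a2):
    # real roots of a0 + a1*t + a2*t^2 = 0, in ascending order
    if a2 == 0.0:
        return [] if a1 == 0.0 else [-a0 / a1]
    disc = a1 * a1 - 4.0 * a2 * a0
    if disc < 0.0:
        return []
    r = math.sqrt(disc)
    return sorted([(-a1 - r) / (2.0 * a2), (-a1 + r) / (2.0 * a2)])

class Poly2:
    def __init__(self, a0, a1=0.0, a2=0.0, horizon=math.inf):
        self.a = (a0, a1, a2)
        self.horizon = horizon    # nearest discontinuity > current t

    def __add__(self, o):
        return Poly2(*(x + y for x, y in zip(self.a, o.a)),
                     horizon=min(self.horizon, o.horizon))

    def eval(self, t):
        a0, a1, a2 = self.a
        return a0 + t * (a1 + t * a2)

def square(p):
    # (a0 + a1*t)^2; only valid for linear input, see the degree
    # restriction discussed below
    a0, a1, a2 = p.a
    assert a2 == 0.0, "square() of a quadratic would need degree 4"
    return Poly2(a0 * a0, 2.0 * a0 * a1, a1 * a1, horizon=p.horizon)

def pmin(p, q, t):
    # hard CSG: take the smaller branch at t; where p - q crosses zero
    # ahead of t the pieces switch, which is a new horizon
    d0, d1, d2 = (x - y for x, y in zip(p.a, q.a))
    h = min(p.horizon, q.horizon)
    for r in quadratic_roots(d0, d1, d2):
        if r > t:
            h = min(h, r)
            break
    branch = p if p.eval(t) <= q.eval(t) else q
    return Poly2(*branch.a, horizon=h)

def trace(scene, t, t_max):
    # hop from patch to patch; each patch either contains a root
    # (a hit) or we skip straight to its horizon
    while t < t_max:
        f = scene(t)              # Poly2 valid on [t, f.horizon)
        for r in quadratic_roots(*f.a):
            if t <= r < f.horizon:
                return r
        t = f.horizon
    return None
```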
While it permits root finding in the fewest possible steps for an arithmetic form, there are
some restrictions associated with this:
1. sqrt() cannot be used, as it does not permit chaining of operations and makes perfect solving
impossible. That means our functions are no longer valid distance functions. On the upside,
tracing an ellipsoid is now remarkably easy, as one can just use the classic implicit function here.
That's one transcendental function fewer; in fact, pure polynomial arithmetic involves only flops.
2. The number of coefficients is bounded by the desired degree of the polynomial; a degree-n polynomial
requires n+1 coefficients. At degree 1, only linear operations are permitted: all surfaces are
planar and the radius of curvature is infinite everywhere (except at discontinuities). At degree 2,
spheres, cylinders and ellipsoids can be described, as well as piecewise spherical and ellipsoidal parts.
At degree 4, toroids and piecewise toroids can be described.
3. While addition and subtraction of polynomials of degree n introduce no change in complexity
(in fact, it's amazingly straightforward), multiplication of two polynomials of degree n and m
creates a new polynomial of degree n+m, overflowing the number of available coefficients.
Under this restriction it is possible to apply square() to a linear function (degree 1 + degree 1 = degree 2),
but applying square() to a quadratic polynomial creates a quartic polynomial. That means a
smooth-min function can only be applied once to two planar surfaces, and then never again.
Upgrading to degree 4 does not help much: now it can be applied twice to join four planar surfaces,
or once to join two spherical surfaces, and then it's game over again. It therefore becomes
very important to be able to approximate higher-degree polynomials with piecewise-defined
polynomials of lower degree. I have not investigated much in this direction yet. (A small
degree-bookkeeping example follows below.)
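To make the degree bookkeeping concrete, here is a tiny example (coefficient lists indexed by power of t; my own illustration):

```python
def poly_mul(p, q):
    # multiplying degree n and degree m gives degree n + m
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

line = [1.0, 2.0]             # degree 1: 1 + 2t
quad = poly_mul(line, line)   # degree 2: squaring a linear term is fine
quart = poly_mul(quad, quad)  # degree 4: squaring a quadratic overflows
                              # a degree-2 coefficient budget
```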
The polynomial arithmetic solution is basically a technique to raytrace implicit surfaces. Every
iteration either presents you with a root or a discontinuity, so convergence is optimal. This
frees up so many resources that tackling some challenges distance fields cannot do well
becomes feasible in a shader: complex fog volumes, refraction, subsurface scattering,
automatic transparency sorting.