
Preface

HellMood and I won this year's (2016) JS1K competition, and I thought this might be a good opportunity to write about the development process and my motivation behind it. If you're already familiar with JS1K, feel free to skip the next two paragraphs.

JS1K is an annual code golfing competition, where participants are encouraged to submit something visually or otherwise impressive in under 1KB of JavaScript. The code will be run inside a shim that provides access to basic elements and context objects (b = body, d = document, c = canvas, g = webgl), applies various polyfills and optionally resizes the canvas, making it easier for the participants to focus on their demo without having to worry about setup routines and browser compatibility.
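To make the rest of this write-up easier to follow, here is roughly what the shim boils down to. This is not the actual JS1K shim code (which also handles the polyfills and resizing), just a sketch of the variables it ends up exposing:

d = document                      // d: the document
b = d.body                        // b: the body element
c = d.createElement('canvas')     // c: the canvas, optionally resized to fill the window
b.appendChild(c)
g = c.getContext('webgl')         // g: the WebGL context used throughout this write-up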

The competition was brought into existence in 2010 by Peter van der Zee and a number of mind-boggling demos have emerged since then. It was about 4 years ago when I first stumbled upon the site. Back then, I had only just begun playing around with computer generated graphics and was eager to learn more about the various techniques that existed. In 2013 I submitted three simple "demos" and continued to participate the following years as I gained more and more knowledge.


This year I worked on a total of 4 demos, with one of them being an improvement of my last year's submission:

When I submitted Romanesco 2.0, it originally looked like this. There was no sound and the colours were not particularly pleasing to look at. To my surprise, many people were already impressed with this version, and shortly thereafter demoscener HellMood contacted me, asking to collaborate, with him being responsible for the sound. 200 emails and several drafts later, this eventually led to the last demo listed above.

The compression techniques that I learned from HellMood later allowed me to update Romanesco 2.0 and make some adjustments. I decided to let the colours match this year's theme ("Elemental") more closely and still had enough bytes left for some interactive sounds, where HellMood once again did his magic. Now the demo felt much more interesting and mystical, and it ended up ranking #1 in the competition. The rest of this post will try to explain this demo in detail.

The Setup

The setup for a WebGL raymarcher is pretty straightforward: create a trivial vertex shader, a buffer with two triangles forming a rectangle, and a fragment shader containing the raymarcher, which operates on the individual pixels that make up the rectangle. Screen resolution, mouse position and time are supplied via uniform variables. 16 different functions and 6 constants are needed to achieve this:

attachShader bindBuffer bufferData compileShader createBuffer createProgram createShader drawArrays enableVertexAttribArray getUniformLocation linkProgram shaderSource uniform1f uniform2f useProgram vertexAttribPointer ARRAY_BUFFER BYTE FRAGMENT_SHADER STATIC_DRAW TRIANGLE_FAN VERTEX_SHADER

A common trick to get rid of the long function names is to loop through the corresponding object and generate unique short symbols for each entry. The WebGL constants can simply be replaced by their numeric representation. Siorki's RegPack does that automatically in its preprocessing step. When using a compression algorithm it is important to structure your code in such a way that you end up with as many recurring patterns as possible.
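To illustrate the renaming trick (this is not RegPack's actual hashing scheme, just a hand-rolled sketch with a hypothetical naming rule), it could look something like this:

// Generate short aliases for every enumerable property of the WebGL context,
// e.g. "createProgram" -> "crP" (first two letters plus the first letter of
// the trailing camel-case word), skipping an alias if it would collide.
for (var name in g)
{
    var alias = name.slice(0, 2) + ((name.match(/[A-Z][a-z]*$/) || [''])[0]).slice(0, 1)
    if (!(alias in g)) g[alias] = g[name]
}

g.crP()                           // now equivalent to g.createProgram()

A real packer makes sure the generated names stay unique for the functions that are actually used; the point is simply that a few generated bytes replace hundreds of bytes of verbose WebGL method names.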

Apart from the buffer data, there wasn't much left to optimize for the setup.
The buffer data to form a rectangle usually looks like this:

new Float32Array([-1,-1,1,-1,-1,1,1,-1,1,1,-1,1])
(bottom-left, bottom-right, top-left, bottom-right, top-right, top-left)

Which you would then draw using gl.TRIANGLES.
However, since we are only dealing with integers, we can use an Int8Array instead and save 3 bytes:

new Int8Array([-1,-1,1,-1,-1,1,1,-1,1,1,-1,1])

Mh, wait a second, why draw six vertices with gl.TRIANGLES when gl.TRIANGLE_FAN only needs four?

new Int8Array([-1,-1,-1,1,1,1,1,-1])
(bottom-left, top-left, top-right, bottom-right)

In fact, we don't even need to draw two triangles; one giant triangle that extends well past the -1 to +1 clip space square will do the job just fine, since everything outside of it gets clipped away anyway:

new Int8Array([-3,1,1,-3,1,1])

Another thing that we can get rid of is the gl.viewport() call; for some reason desktop browsers don't seem to care and use the entire canvas by default. Mobile browsers don't seem to like this, though.

The final setup for a shader with one uniform variable is shown below. I left the constants in their normal form to make it a little easier to read.

P=g.createProgram()                                      // P = the program, A = a scratch variable that gets reused below
g.shaderSource(A=g.createShader(gl.VERTEX_SHADER),"attribute vec2 P;void main(){gl_Position=vec4(P,0,1);}")
g.compileShader(A);g.attachShader(P,A)
g.shaderSource(A=g.createShader(gl.FRAGMENT_SHADER),"precision mediump float;uniform float T;void main(){...}")
g.compileShader(A);g.attachShader(P,A)
g.linkProgram(P);g.useProgram(P)
g.bindBuffer(A=gl.ARRAY_BUFFER,g.createBuffer())         // A now holds the numeric constant gl.ARRAY_BUFFER
g.enableVertexAttribArray(0)
g.vertexAttribPointer(0,2,gl.BYTE,0,0,0)
g.bufferData(A,new Int8Array([-3,1,1,-3,1,1]),gl.STATIC_DRAW)
setInterval('g.uniform1f(g.getUniformLocation(P,"T"),A+=.01);g.drawArrays(gl.TRIANGLE_FAN,0,3)',16)  // A is reused once more as the running time for the uniform T

The Shader

The minified and pattern-optimized version of the shader looks as follows:

precision mediump float;uniform vec2 R,M;uniform float T; float t=5e-3;void main(){for( float i=0.;i<64.;i++){vec3 p=vec3((2.*gl_FragCoord.xy-R)/R.yy,t-1.),b=vec3(.707,.707,0); float a=T;p.xz*=mat2(cos(a),-sin(a),sin(a),cos(a));for( float i=0.;i<20.;i++){ a=(M/R*6.).x;p.xz*=mat2(cos(a),-sin(a),sin(a),cos(a)); a=(M/R*6.).y;p.xy*=mat2(cos(a),-sin(a),sin(a),cos(a));p-=min(0.,dot(p,b))*b*2.;b=b.zxx;p-=min(0.,dot(p,b))*b*2.;b=b.zxz;p-=min(0.,dot(p,b))*b*2.;b=b.xxy;p=p*1.5-.25;}t+=length(p)/3325.;if(length(p)/3325.<5e-3||t>2.){b=vec3(1);p*=.5;gl_FragColor=vec4(p/length(p)*(t<2.?5./i:i/64.),dot(p,b));break;}}}

You might have noticed that we precede every occurrence of float with a space. This is simply a continuation of the pattern that is already present in the first statement (precision mediump float), where the space can't be omitted. Similarly, we use the same rotation matrix over and over again instead of a function or #define macro, so the packer sees one more recurring pattern.

Now, to explain the details, I ran the code above through a beautifier, split it up into chunks, and commented on them individually. If you're not familiar with the concept of raymarching, I suggest skimming over the first part of this article first. It is fairly easy to understand provided you have a basic understanding of vector arithmetic.
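In a nutshell, and stripped of all GLSL specifics, the raymarching loop looks like the JavaScript sketch below. The sphere distance function is purely illustrative and has nothing to do with the demo's fractal:

// Generic sphere tracing: step along the ray by the distance to the closest
// surface until we either hit something or give up.
function sdf(p)                         // signed distance to a unit sphere
{
    return Math.hypot(p[0], p[1], p[2]) - 1
}

function march(origin, dir)
{
    var t = 0
    for (var i = 0; i < 64; i++)        // at most 64 steps, just like the demo
    {
        var p = [origin[0] + dir[0]*t,  // current point on the ray
                 origin[1] + dir[1]*t,
                 origin[2] + dir[2]*t]
        var d = sdf(p)                  // distance to the closest surface
        if (d < 5e-3) return t          // close enough: we hit something
        if (t > 2)    return -1         // travelled too far: background
        t += d                          // it is safe to advance by that distance
    }
    return -1
}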

01 | precision mediump float;
02 |
03 | uniform vec2 R, M; // resolution, mouse position
04 | uniform float T;   // time
05 | 
06 | float t = 5e-3;
07 | 
08 | void main()
09 | {
10 |     // Raymarching loop with a maximum of 64 steps
11 |     for (float i = 0.; i < 64.; i++)
12 |     {

The ray direction we march along is vec3(0,0,1), a vector pointing straight into the screen. Normally you would specify a target and calculate the direction as normalize(target - origin), but a simple orthographic projection will suffice for this scene.

The ray origin will be different for each pixel, and is calculated as:

(2.0 * gl_FragCoord.xy - R) / R.yy

which is equal to:

(gl_FragCoord.xy / R - 0.5) * 2.0 * vec2(R.x/R.y,1)

and simply transforms the pixel coordinates into a -1 to +1 range along the Y-axis while correcting for the aspect ratio along the X-axis.
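A quick numeric sanity check in JavaScript (with a made-up 800×600 resolution and the pixel (600, 450); not part of the demo) shows that the two forms agree:

var R    = [800, 600], frag = [600, 450]

var formA = [(2*frag[0] - R[0]) / R[1],                 // (2.*gl_FragCoord.xy - R) / R.yy
             (2*frag[1] - R[1]) / R[1]]

var formB = [(frag[0]/R[0] - .5) * 2 * (R[0]/R[1]),     // (gl_FragCoord.xy/R - .5) * 2. * vec2(R.x/R.y, 1)
             (frag[1]/R[1] - .5) * 2]

console.log(formA, formB)                               // both print [0.666..., 0.5]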

The z-component of our origin will be -1. This allows us to place our objects right at the origin of the coordinate system without having to translate them first, which also simplifies some equations, as we will see later. The variable p below is the combined form of origin + direction * t and essentially represents our final ray. t will be increased on every iteration by the distance to the closest object in the scene.

13 |         vec3 p = vec3((2. * gl_FragCoord.xy - R) / R.yy, t - 1.),
14 |              b = vec3(.707, .707, 0);

b is a vector storing one of the symmetry planes of the tetrahedron in normalized form, which are:

vec3(1,1,0) / √2
vec3(0,1,1) / √2
vec3(1,0,1) / √2

"Why the tetrahedron?", you might ask. It turns out to be the simplest geometric solid (polyhedron) you can construct in 3 dimensions (in terms of faces, edges and vertices) and is generally just a good base for fractal shapes. If you open the demo and move your mouse to the top left corner, you will see the tetrahedron without any rotational transformations applied.

15 |         // Here we apply a time-dependent rotation around the Y-axis
16 |         float a = T; p.xz *= mat2(cos(a), -sin(a), sin(a), cos(a));
17 |
18 |         // Fractal iteration loop
19 |         for (float i = 0.; i < 20.; i++)
20 |         {
21 |             // Rotating the space again around the Y- and Z-axis,
22 |             // the angle depends on the mouse position this time
23 |             // and ranges from 0 to (almost) 2π
25 |             a = (M / R * 6.).x; p.xz *= mat2(cos(a), -sin(a), sin(a), cos(a));
26 |             a = (M / R * 6.).y; p.xy *= mat2(cos(a), -sin(a), sin(a), cos(a));

Now comes the essential part: folding/mirroring space around the symmetry planes of the tetrahedron. In conjunction with the rotations above, this is what gives the fractal its unique shape. The formula for mirroring a point around an arbitrary plane normal can be derived quite easily. All we need to do is: find the shortest distance between the point in space and the plane, scale the normal by that distance and add it twice to our point.

One way to do this is to describe a property that we already know to be true for this particular case: when we scale the plane normal n by a scalar s, add it to the point vector and subtract the plane origin vector o, there exists a solution where the resulting vector is perpendicular to the plane normal n, and the dot product between the two will thus be equal to zero. Since s is the only unknown, the equation can be solved as follows (note that ⋅ stands for the dot product):

    [(p + n * s) - o] ⋅ n = 0 
p ⋅ n + n ⋅ n * s - o ⋅ n = 0             | + o ⋅ n
        p ⋅ n + n ⋅ n * s = o ⋅ n         | - p ⋅ n
                n ⋅ n * s = o ⋅ n - p ⋅ n | ÷ n ⋅ n
---------------------------------------------------
                        s = (o ⋅ n - p ⋅ n) ÷ n ⋅ n

Now that we know the distance, we can construct the mirroring formula I described earlier: p += 2 * n * s

By choosing our plane origin to lie on the origin of our coordinate system, it becomes a null-vector and s simplifies to: - p ⋅ n ÷ n ⋅ n. Since our plane symmetry vector is normalized, n ⋅ n will be equal to 1 and s just becomes: - p ⋅ n.

Plugging that into our formula gives us:

p += 2 * n * (- p ⋅ n)

or in GLSL syntax:

p += 2.0 * n * dot(-p,n)

which can be simplified to:

p -= 2.0 * n * dot(p,n)
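As a quick sanity check (plain JavaScript with made-up values, not part of the demo), mirroring the point (1, 0, 0) across the plane with normal (1, 1, 0)/√2 should land it at (0, -1, 0):

var n = [Math.SQRT1_2, Math.SQRT1_2, 0]        // normalized plane normal (1,1,0)/sqrt(2)
var p = [1, 0, 0]

var d = p[0]*n[0] + p[1]*n[1] + p[2]*n[2]      // dot(p, n)
p = [p[0] - 2*n[0]*d,
     p[1] - 2*n[1]*d,
     p[2] - 2*n[2]*d]

console.log(p)                                 // [0, -1, 0] (up to floating point rounding)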

Now there is just one more case to consider: if the point already lies on the side of the plane that the normal points towards, we don't want to mirror it. To do this, we simply clamp the signed distance to non-positive values using the min() function. The next three lines apply the mirroring operation for each symmetry plane. The corresponding normal vector is constructed by rearranging b's components using GLSL's swizzling syntax.

27 |             p -= min(0., dot(p, b)) * b * 2.; b = b.zxx;
28 |             p -= min(0., dot(p, b)) * b * 2.; b = b.zxz;
29 |             p -= min(0., dot(p, b)) * b * 2.; b = b.xxy;

It is worth noting that the above can also be expressed using if-statements and essentially behaves just like the abs() function:

if(p.x+p.y<0) p.xy = -p.yx;
if(p.y+p.z<0) p.zy = -p.yz;
if(p.x+p.z<0) p.xz = -p.zx;

Even though this looks shorter, the other approach is not only faster but compresses better due to the lines being almost identical apart from the last 5 characters. In hindsight the if-statements could probably be compressed similarly using a second variable, but then we wouldn't have gone through the fun math equations. ;)
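For the curious, here is a small JavaScript sketch (not part of the demo, which does this in GLSL) that applies one fold across the first symmetry plane with each variant and shows that they produce the same result:

var SQ = Math.SQRT1_2                           // b = vec3(.707, .707, 0)

function foldMin(p)                             // the demo's variant
{
    var d = Math.min(0, p[0]*SQ + p[1]*SQ)      // min(0., dot(p, b))
    return [p[0] - 2*SQ*d,                      // p -= min(0., dot(p, b)) * b * 2.
            p[1] - 2*SQ*d,
            p[2]]
}

function foldIf(p)                              // the if-statement variant
{
    if (p[0] + p[1] < 0) return [-p[1], -p[0], p[2]]
    return p
}

console.log(foldMin([-1, .25, 3]))              // [-0.25, 1, 3] (up to rounding)
console.log(foldIf ([-1, .25, 3]))              // [-0.25, 1, 3]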

The final operations that we need to apply to form our fractal are scaling and translating. Without the translation, all the points would eventually fold into themselves, forming one boring sphere. And without the scaling it wouldn't be a true fractal with endlessly repeating patterns.

30 |             p = p * 1.5 - .25;
31 |         }

The gif below demonstrates how different scaling parameters affect the shape of the fractal.
You can play around with the code on Shadertoy.

By playing around with the symmetry planes and translation parameters, we can generate even funnier shapes:

Since we scaled the entire space in the process of generating the fractal, we need to "undo" the scaling before we calculate the distance. In this case 3325. is an approximation of pow(scaling,iterations) or 1.5^20. When the calculated distance is either close enough to zero or we travelled too far, we can finally colour the pixel and exit the loop:

32 |         t += length(p) / 3325.;
33 |
34 |         if (length(p) / 3325. < 5e-3 || t > 2.)
35 |         {
36 |             b = vec3(1); p *= .5;
37 |             gl_FragColor = vec4(p / length(p) * (t < 2. ? 5. / i : i / 64.), dot(p, b));
38 |             break;
39 |         }
40 |     }
41 | }

The deformed space coordinates stored in p give us a nice colour gradient, which transitions smoothly with the rotation angles. To make things look more realistic, we add fake ambient occlusion by utilizing the number of marching steps that were required to reach the fractal at a certain position (stored in the variable i): the more objects, or parts of an object, the ray passes close to on its way to its destination, the more steps are required.

Lastly, we can even add some fake specular lighting by adding all the components of the position vector and assigning that value to the alpha channel. I'm sure there is a mathematical explanation for why this works, though I only discovered that one by accident.

The Sound

For the sound we utilized the Web Audio API. Since we wanted the sound to be dynamic, the ScriptProcessorNode was exactly what we needed. For demos with static music there is also the WAVE PCM approach. I had a little discussion with @p01 regarding which of the two compresses better; in the end it really depends on your demo and your intentions.

The variable b below contains the vertical mouse position in pixels.

f = new AudioContext();
a = f.createScriptProcessor(8192,1,1);  // 8192-sample buffer, 1 input / 1 output channel
a.connect(f.destination);

a.onaudioprocess = function(e)
{
    q = e.outputBuffer.getChannelData(0);

    for(i=8192;i--;)
    {
        // attenuate whatever is left in the output buffer from the previous
        // block, which acts as a cheap feedback/echo
        q[i] *= .8;

        // add six sine partials whose frequencies depend on the time A,
        // the vertical mouse position b and the partial index j
        for(j=7;--j;) q[i] += Math.sin( i*(A*50%j)*Math.floor(b/150)*j/326 ) *
                              Math.sin( i/2606 ) *
                              Math.sin( A )/j/4;
    }
}

And that sums up the contents of our demo. Running the entire code through RegPack, this is what we end up with:

I hope you learned a thing or two and would like to thank:
