@kebby
Last active August 12, 2023 21:05
How to properly Deadline Three-Dee compo

The anaglyph 3D compo has been a staple of Deadline for years now, and yet often enough there are entries where the 3D effect just doesn't work at the party: what looked good on the creator's screen turns into very confusing 3D and double images on the big screen, or worse - people in the audience get headaches and have to take off their 3D glasses while the entry is running.

Most of this comes down to pretty simple physics: watching a stereoscopic image on a small screen is a very different thing from watching the same image on a big projection screen.

How do we even see 3D?

Basically, the way the human visual system focuses on different distances is pretty simple: first, the lenses in your eyes need to set their focus to the distance of the object you're watching, and second, both of your eyes have to turn towards that object to put it into the center of both visual fields.

Watching stereoscopic content on a screen is already harder on the brain than looking at a real object, because you need to decouple those two mechanisms. When you look at a screen, the lens focus always has to stay at the distance between your eyes and the screen, regardless of the perceived distance of the object you're looking at - and that alone is something the human mind never has to do in reality. The way our brains are wired from a lifetime of experience, those two systems (focusing and turning your eyes) are completely coupled, and suddenly you have to use them independently: your eyes need to turn towards the perceived distance of the object, while your lenses need to stay focused on the distance to the screen. How well the brain adapts to this varies a lot from person to person, and it's the reason so many people can't watch 3D movies without getting headaches.

This is just a fact of our biology, and something you have no control over. But it makes it all the more important to author your content so that it doesn't end up too far in front of or behind the screen - the further it does, the more you strain the viewer's ability to actually look at what's happening.

What's way more important, though, is how far you make the viewer turn their eyes, because that's where a lot of entries just fall apart.

Depth and pupillary distance

When you look at any object, you turn your eyes slightly inward so that both of them target what you're looking at. Imagine a head with two eyes and an object at some distance, and draw lines between the eyes and from each eye to the object. What you get is a triangle, and the closer the object is, the more the eye rays converge: the angle at each eye, between the line connecting the eyes and the line to the object, gets smaller. But the most important part is this: if you move the object away towards infinity, those angles approach 90 degrees but never go beyond it. If the object is infinitely far away, your "eye rays" are pretty much two parallel lines.

This is the point: the eyes will never turn further outward than perfectly straight ahead. Anything else is simply not physically possible, and as soon as the left and right images on screen are further apart than the distance between your left and right eye, the brain will just nope out - instead of a depth effect you get two weird, flickery, hard-to-decipher images.

So this means that the distance between the left and right image on screen must never be bigger than the distance between the viewer's left and right eyes. And this is why a lot of entries work on a computer screen but not at the party: the bigger the screen, the bigger the physical distance between the two images. What's a nice little 3D effect on your monitor becomes a physically impossible mess on the big screen really quickly.

What this means is that you have to account not only for the screen size but also for the smallest pupillary distance of anyone in your audience. From https://www.allaboutvision.com/eye-care/measure-pupillary-distance/ :

PD is measured in millimeters (mm). The average pupillary distance for an adult is about 63 mm, but this is not a number you’ll want to assume. Pupillary distance can vary widely — roughly between 51 mm and 74.5 mm for women and 53 mm and 77 mm for men.

So if we disregard any children in the audience, let's assume a conservative 50 mm here. But it's not even that: remember that you really want to limit your depth range around the screen so people can watch without straining their eyes and minds too much, so honestly, it's more like 30 or 40 millimeters at most.

So er, can you give me the TL;DR already?

Deadline 2022's big screen is about 5 meters wide. So if we assume a 1920 pixel wide screen mode, the maximum distance that the left and right image can realistically diverge is 1920 px * 40 mm / 5000 mm - about 15 pixels, or more generally, 0.8% of your horizontal resolution. And of course it's way worse if you have a lower resolution than that. For example, if you've got an entry on a C64, factoring in 4:3, the border and everything, the left and right image can only be about four pixels apart before some people can't interpret it as 3D any more. Yes, it's that little.
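If you want to redo that back-of-the-envelope calculation for your own target screen, here's a minimal sketch. The 40 mm default is the "safe" pupillary distance from above; the 3.75 m in the second call is my assumption for how wide a 4:3 picture ends up on a 5 m wide 16:9 screen, so adjust both to your situation.

```python
def max_disparity_px(screen_width_m, horizontal_res_px, safe_pd_mm=40.0):
    """Maximum left/right image separation in pixels that stays below the
    given 'safe' pupillary distance on a screen of that physical width."""
    return horizontal_res_px * (safe_pd_mm / 1000.0) / screen_width_m

print(max_disparity_px(5.0, 1920))   # ~15 px on a 5 m wide, 1920 px screen
print(max_disparity_px(3.75, 320))   # ~3.4 px for a 320 px wide 4:3 picture on that screen (assumed width)
```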

All these calculations only apply to content that's supposed to end up behind the screen; if you want to put stuff in front of it you've got a bit more leeway, because we can turn our eyes inwards pretty far. But remember that you don't want to stretch it too much (the farther an object is from the actual screen distance, the harder it gets for the eyes to focus), so it's a good idea to apply the same limits in the other direction too. In general the perceived distance depends on a) your pupillary distance, b) the distance between you and the screen, and c) the angle at which you look at the screen (the further to the side you sit, the less 3D effect you get, but hey, at least that doesn't make it worse), so don't even try to do anything physically correct here - it's an effect, nothing more. :)

Here's a nice graph to show how the distance between the l/r images affects the perceived depth: https://www.desmos.com/calculator/tkc6fz1pw2 - the x axis is the image distance in pixels, the y axis is perceived depth in meters. r is the horizontal screen resolution, w the screen width (m), d the pupillary distance (mm) and s the distance of the viewer from the screen (m). You can play around with the values and see how they affect the depth effect.
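In case the graph link ever dies, the relationship behind it boils down to a pair of similar triangles between the eyes and the two image points. Here's my own reconstruction of that formula (not copied from the Desmos sheet, so double-check it before relying on it); the 10 m viewer distance in the example is just an assumed seat somewhere in the hall:

```python
def depth_from_screen_m(disparity_px, res_px, screen_width_m, pd_mm, viewer_dist_m):
    """Perceived depth relative to the screen plane, in meters.
    Positive = behind the screen, negative = in front of it.
    disparity_px is the on-screen left/right image separation
    (negative means the images are crossed, so the point pops out)."""
    pd_m = pd_mm / 1000.0
    disp_m = disparity_px * screen_width_m / res_px
    if disp_m >= pd_m:
        return float("inf")  # images further apart than the eyes: the eyes would have to diverge
    return viewer_dist_m * disp_m / (pd_m - disp_m)

# 5 m / 1920 px screen, 63 mm pupillary distance, viewer sitting 10 m from the screen:
print(depth_from_screen_m(15, 1920, 5.0, 63.0, 10.0))   # ~16 m behind the screen - way too deep
print(depth_from_screen_m(4, 1920, 5.0, 63.0, 10.0))    # ~2 m behind the screen
```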

From experience, staying within about 2 m of the screen depth seems to work well enough. Check the graph - it's really not that many pixels!

How does this translate to realtime 3D?

Now let's say you've got your 3D engine or shader. How do you make that anaglyph? Pretty easy: render the scene twice with two different cameras, then take the red channel from the left camera and the green and blue channels from the right camera, combine them, and voila!
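In a real engine this is a one-liner in a compositing shader; here's the same channel combination as an offline numpy sketch, assuming RGB channel order:

```python
import numpy as np

def combine_anaglyph(left_rgb, right_rgb):
    """Take the red channel from the left-eye render and the green and blue
    channels from the right-eye render. Inputs are HxWx3 arrays."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red from the left eye
    return out

# e.g. two 1080p renders of the same scene from slightly different cameras:
left  = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
anaglyph = combine_anaglyph(left, right)
```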

To shift the camera left and right, you actually need to take two things into account: The eye distance in your scene, and the one in the real world.

Move the camera slightly left/right in its own space. Don't get fancy and try to rotate it, just shift it a bit. Now you're rendering from two slightly offset perspectives, and that correctly models the two eyes in the scene.

But what's still missing is the fact that the eyes of your dear viewers also look at the screen from slightly different positions, so you also need to shift/scroll the two images left/right in 2D. This is easy to do in a shader (just add something to your fragment x coordinate), but luckily there's also a neat trick for conventional engines: one of the values in the projection matrix (the one that maps z to x) does exactly that.
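As a sketch of that projection matrix trick, assuming an OpenGL-style perspective matrix with column vectors (the numbers in the usage part are placeholders, not values from any actual entry):

```python
import numpy as np

def stereo_perspective(fov_y_deg, aspect, near, far, ndc_shift):
    """OpenGL-style perspective matrix with an extra term in the z-to-x slot.
    Because clip.w equals -z for this kind of matrix, that term becomes a
    constant horizontal offset of ndc_shift after the perspective divide
    (NDC x runs from -1 to +1 across the screen)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, -ndc_shift,                  0.0],
        [0.0,        f,   0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                        0.0],
    ])

# Per eye: translate the camera sideways in its own space (the eyes in the scene),
# then shift the two images in opposite directions in 2D (the eyes in front of the screen).
eye_offset = 0.05    # hypothetical camera offset in scene units - tune per scene
ndc_budget = 0.016   # total shift of ~0.8% of the screen width; NDC x spans 2.0, so 2 * 0.008
left_proj  = stereo_perspective(60.0, 16 / 9, 0.1, 100.0, -ndc_budget / 2)
right_proj = stereo_perspective(60.0, 16 / 9, 0.1, 100.0, +ndc_budget / 2)
```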

If you'd like to see an example of how this can be done, here's the basic setup for Cables that won me the 3D compo in 2022: https://cables.gl/p/S6k4me - look, learn and copy as you like.

As for how much to shift what: start with the 2D shift. This sets the offset for when the virtual cameras look towards infinity, and thus the "Z far" plane of your demo in our physical world. Remember what I wrote above: better err on the side of caution - this really shouldn't be more than a percent of your screen width, preferably less, if you want your entry to work. After that you can set the camera offset in 3D space, which controls the strength of the 3D effect. If you set it so that your visuals pop out of the screen about as much as they go into it, you should be safe. But remember: the depth effect absolutely depends on the screen size, so unless you've got a 5 m wide big screen at home, you won't be able to gauge how strong the effect is going to be. Again, don't overdo it!
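If you want to derive that 2D shift from the pixel budget of the TL;DR section instead of eyeballing it, the conversion is simple (a sketch, assuming NDC coordinates where the full screen width is 2.0 units):

```python
def ndc_shift_for_budget(budget_px, res_px):
    """Convert a disparity budget in pixels into the total 2D shift between
    the two images in NDC units (NDC x spans 2.0 across the screen)."""
    return 2.0 * budget_px / res_px

print(ndc_shift_for_budget(15, 1920))   # ~0.016 for the Deadline big screen numbers above
```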

And what about colors?

But on a slightly more positive note: yes, you can use colors in a limited way, at least on platforms that have more than the usual 8 full-RGB colors. The mind is able to piece the red and cyan images together into a quasi-consistent overall color, so if you use only slightly saturated colors, filter them through red and cyan for the two images, and avoid too much colorful detail, it'll work - and it'll let you stand out from the monochromatic lot. So have fun!
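One simple way to get "only slightly saturated colors" is to blend both renders towards grey before the red/cyan combination - a sketch with an arbitrary saturation factor that you'd have to tune by eye:

```python
import numpy as np

def tinted_anaglyph(left_rgb, right_rgb, saturation=0.3):
    """Pull both renders towards grey (only mildly saturated colors survive
    the red/cyan split well), then combine them the usual way:
    red from the left eye, green and blue from the right eye."""
    def desaturate(img, amount):
        grey = img.mean(axis=-1, keepdims=True)   # cheap per-pixel luma approximation
        return grey + (img - grey) * amount

    left = desaturate(left_rgb.astype(np.float32), saturation)
    right = desaturate(right_rgb.astype(np.float32), saturation)
    out = right.copy()
    out[..., 0] = left[..., 0]
    return np.clip(out, 0, 255).astype(np.uint8)
```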
