Last active September 28, 2022 17:18

How to properly Deadline Three-Dee compo

The anaglyphic 3D compo has been a staple of Deadline for years now, and yet there are often entries where the 3D effect just doesn't work at the party: what looked good on the creator's screen turns into very confusing 3D and double images on the big screen, or worse - people in the audience get headaches and have to take off their 3D glasses while the entry is on.

Most of this comes down to pretty simple physics, and it mostly boils down to the fact that watching a stereoscopic image on a small screen is a very different thing from watching the same image on a big projection screen.

How do we even see 3D?

Basically, the way the human visual system focuses on different distances is pretty simple. First, the lenses in your eyes need to focus to the distance of the object you're watching, and second, both of your eyes have to turn towards that object to put it into the center of both visual fields.

Watching stereoscopic content on a screen is already harder on the brain than looking at any object in reality, because in order to do so, you need to decouple those two mechanisms. When you look at a screen, the "lens focus" always needs to be set to the distance between your eyes and the screen instead of the perceived distance of the object you're looking at - and that's something the human mind never needs to do in the real world.

The way our brains are trained by life experience, those two systems (focusing and turning your eyes) are always completely coupled, and suddenly you need to use them independently: your eyes need to turn towards the perceived distance of the object, but your lenses need to stay focused on the distance to the screen. How well the brain adapts to this varies quite a lot between people, and it's the reason so many people can't watch 3D movies without getting headaches.

This is just a fact of our biology, and something you don't have control over. But it makes it all the more important to author your content so that it doesn't end up too far in front of or behind the screen, because the further it does, the more you strain the viewer's ability to actually look at what's happening.

What's even more important, though, is how far you make the viewer turn their eyes, because that's where a lot of entries just fall apart.

Depth and pupillary distance

When you look at any object, what happens is that you turn your eyes slightly inward so that they both target what you're looking at. Imagine a head with two eyes and some object at some distance, and draw lines between the eyes, and from each eye to the object. What you get is a triangle, and the angles at the eyes get narrower the closer the object is. But the most important part is that if you increase the distance between the head and the object towards infinity, those angles approach 90 degrees but never go beyond it. If the object is infinitely far away, your "eye rays" are pretty much two parallel lines.
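The triangle above is easy to put into numbers. Here's a small sketch (my own illustration, not from the article) that computes how far each eye turns inward from "perfectly straight ahead" - the complement of the article's 90-degree angle - for an object centered at a given distance, using the average 63mm pupillary distance quoted below:

```python
import math

def vergence_angle_deg(distance_m, pd_mm=63.0):
    """How far each eye turns inward from 'straight ahead' for a
    centered object at the given distance (simple triangle model).
    63mm is the average adult pupillary distance."""
    half_pd_m = (pd_mm / 1000.0) / 2.0
    return math.degrees(math.atan(half_pd_m / distance_m))

# The closer the object, the more each eye has to turn inward;
# as the distance grows, the angle approaches 0 (parallel eye rays).
for d in (0.3, 1.0, 5.0, 100.0):
    print(f"{d:>6} m -> {vergence_angle_deg(d):.3f} deg inward")
```

The angle only ever shrinks towards zero with distance - it never goes negative, which is exactly the "eyes never turn outward past straight" limit discussed next.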

This is the point: the eyes will never turn further outward than perfectly straight. Anything else is just not physically possible, and as soon as the left and right images are further apart on screen than the distance between your left and right eye, the brain will just nope out, and instead of a depth effect you'll get two weird, flickery, hard-to-decipher images.

So this means: the distance between the left and right image on screen must never be bigger than the distance between the viewer's left and right eye. And this is why a lot of entries work on a computer screen but not at the party - the bigger the screen, the bigger the physical distance between the two images. What's a nice little 3D effect on your monitor becomes a physically impossible mess on the big screen really quickly.

What this means is that you have to account for not only the screen size but also the smallest pupillary distance of anyone in your audience. From :

PD is measured in millimeters (mm). The average pupillary distance for an adult is about 63 mm, but this is not a number you’ll want to assume. Pupillary distance can vary widely — roughly between 51 mm and 74.5 mm for women and 53 mm and 77 mm for men.

So if we disregard any children in the audience, let's assume a safe 50mm here. But it's not even that - remember that you really want to limit your depth range around the screen so people can watch without straining their eyes and minds too much, so honestly, it's more like 30 or 40 millimeters max.

So er, can you give me the TL;DR already?

Deadline 2022's big screen is about 5 meters wide. So if we assume a 1920-pixel-wide screen mode, the maximum distance the left and right images can realistically diverge is 1920px * 40mm / 5000mm, or about 15 pixels - more generally, 0.8% of your horizontal resolution. And of course it's way worse if you have a lower resolution than that. For example, if you've got an entry on a C64, once we factor in 4:3, the border and everything, that's only about four pixels the left and right image can be apart before some people can't interpret it as 3D any more. Yes, it's that little.
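That calculation is easy to rerun for your own platform and any screen. A minimal sketch of the formula above, with the article's 40mm safety budget as the default:

```python
def max_separation_px(h_resolution, screen_width_m, budget_mm=40.0):
    """Maximum left/right image separation in pixels before the
    on-screen separation exceeds the given pupillary-distance budget
    (40mm = the article's conservative comfort limit)."""
    return h_resolution * budget_mm / (screen_width_m * 1000.0)

print(max_separation_px(1920, 5.0))  # 1920 * 40 / 5000 -> 15.36 px
```

Plug in your actual horizontal resolution as it ends up on the big screen (including borders and pillarboxing for 4:3 content) and the answer gets small fast.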

All these calculations only apply to content that's supposed to end up behind the screen; if you want to put stuff in front of it you've got a bit more leeway - we can turn our eyes inward pretty far. But remember that you don't want to stretch it too much, so it's a good idea to apply the same limits in the other direction too. In general the perceived distance depends on a) your pupillary distance, b) the distance between you and the screen, and c) the angle at which you look at the screen (the further off to the side you sit, the less 3D effect you get - but hey, at least that doesn't make it worse), so don't even try to do anything physically correct here - it's an effect, nothing more. :)

Here's a nice graph to show how the distance between the l/r images affects the perceived depth: the x axis is the image distance in pixels, the y axis is perceived depth in meters. r is the horizontal screen resolution, w the screen width (m), d the pupillary distance (mm) and s the distance of the viewer from the screen (m). You can play around with the values and see how they affect the depth effect.
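Since the graph link itself isn't reproduced here, this is one plausible reconstruction of the formula behind such a graph, derived from similar triangles (eye rays through the two on-screen image positions). It's a sketch under that simple model, using the article's r, w, d, s parameter names; positive depth means behind the screen, negative means in front:

```python
def perceived_depth_m(sep_px, r, w, d, s):
    """Perceived depth relative to the screen plane (positive = behind,
    negative = in front) for a left/right separation of sep_px pixels.
    r: horizontal resolution (px), w: screen width (m),
    d: pupillary distance (mm), s: viewer-to-screen distance (m).
    Similar triangles give: x/d = z/(s + z)  ->  z = s*x / (d - x)."""
    x_mm = sep_px * w * 1000.0 / r  # separation on screen, in mm
    if x_mm >= d:
        return float("inf")  # wider than the eyes: not fusible at all
    return s * x_mm / (d - x_mm)
```

Note how the depth blows up to infinity as the separation approaches the pupillary distance - that's the same hard limit as before, just seen from the depth side.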

And what about colors?

But on a slightly more positive note: Yes, you can use colors in a limited way, at least on platforms that have more than the usual 8 full-RGB colors. The mind is able to piece together the red and cyan image to a quasi-consistent overall color, so if you use only slightly saturated colors, filter those through red and cyan for the two images, and avoid having too much colorful detail, it'll work and it'll let you stand out from the monochromatic lot. So have fun!
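As a per-pixel sketch of that idea (my own illustration - the 0.6 desaturation amount is an arbitrary example value, not a recommendation from the article): desaturate your palette first, then build the final pixel by taking red from the left-eye image and green+blue (cyan) from the right-eye image:

```python
def desaturate(rgb, amount=0.6):
    """Pull a color toward its own gray value; lightly saturated
    colors survive the red/cyan split much better than pure hues."""
    r, g, b = rgb
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # standard luma weights
    mix = lambda c: round(c + (gray - c) * amount)
    return (mix(r), mix(g), mix(b))

def anaglyph(left_rgb, right_rgb):
    """Red channel from the left-eye image, green and blue (= cyan)
    from the right-eye image - the classic red/cyan combination."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])
```

Run both images through desaturate() before combining, keep large areas of similar color instead of lots of colorful detail, and the brain gets a fighting chance to fuse the two tints back into one overall color.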
