Overview

There's a desire to refactor the existing "Target Camera" and "Render Layers" mechanisms into a more unified approach. This breaks down into three separate but related features.

Unifying the "target camera" and "render layers" concepts.

The motivation here is just API simplicity: we have two different mechanisms for controlling visibility (well, three if you count Visibility), some of which are specific to UI and some of which are not.

Logically speaking, both "target camera" and "render layers" are representations of sets: the target camera is a set whose members are entity ids, and render layers is a set whose members are the integers 0..=31. There's an additional restriction that a target-camera set can contain only a single camera; this is mainly for performance reasons, so that we don't end up having to store a Vec<Entity>.
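
For concreteness, here is roughly how the two mechanisms look today, sketched against the Bevy 0.13 API (the component names are real; the surrounding spawn code is illustrative):

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;
use bevy::ui::TargetCamera;

fn spawn_examples(mut commands: Commands) {
    // RenderLayers: a set whose members are drawn from the integers 0..=31,
    // stored as a bitmask.
    commands.spawn((
        SpatialBundle::default(),
        RenderLayers::from_layers(&[0, 2, 5]),
    ));

    // TargetCamera (UI-specific): effectively a set with exactly one member,
    // where that member is an entity id.
    let camera = commands.spawn(Camera2dBundle::default()).id();
    commands.spawn((NodeBundle::default(), TargetCamera(camera)));
}
```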

The concept of "target camera" could be generalized somewhat by replacing it with a generic entity id. In other words, the entity id doesn't need to be a camera specifically; it just has to be a valid entity id. We never actually dereference the id; we just use it as a unique identifier. (This means that multiple cameras could have the same target id if we wanted.)
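
A minimal sketch of what that generalization might look like. `RenderGroup` is a hypothetical name, not an existing Bevy component:

```rust
use bevy::prelude::*;

/// Hypothetical unified component: membership in a render group,
/// identified by an arbitrary entity id that is never dereferenced.
#[derive(Component, Copy, Clone, PartialEq, Eq)]
struct RenderGroup(Entity);

fn spawn_group(mut commands: Commands) {
    // Any valid entity id will do as a group identifier; here we spawn
    // an empty entity purely to mint a unique id.
    let group = commands.spawn_empty().id();

    // The camera and the objects it renders share the same group id.
    commands.spawn((Camera3dBundle::default(), RenderGroup(group)));
    commands.spawn((SpatialBundle::default(), RenderGroup(group)));
}
```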

Making render layers inheritable / downward propagation.

The primary motivation for this revolves around spawning scenes from an asset such as a glTF file. In current Bevy, when you load a scene and then spawn it, setting render layers on the scene node has no effect on the individual objects within the scene. To actually get the child objects to appear on the correct layers, you somehow need to traverse the scene's descendants and modify them. This means that scenes cannot be treated as encapsulated black boxes; instead, the internal details of the scene have to be exposed to user code.

Downward propagation of render layers - similar to the way TargetCamera and Visibility are inheritable - would solve this problem.
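
A naive sketch of such propagation, assuming a hypothetical `InheritedRenderLayers` computed component (analogous to `InheritedVisibility`). A real implementation would use change detection rather than re-walking the hierarchy every frame:

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

/// Hypothetical computed component: the layers an entity is
/// effectively on after propagation.
#[derive(Component, Clone, Default)]
struct InheritedRenderLayers(RenderLayers);

fn propagate_render_layers(
    roots: Query<Entity, Without<Parent>>,
    children: Query<&Children>,
    explicit: Query<&RenderLayers>,
    mut inherited: Query<&mut InheritedRenderLayers>,
) {
    for root in &roots {
        // `RenderLayers::default()` is layer 0, matching how the engine
        // treats entities that have no explicit `RenderLayers`.
        propagate_recursive(root, RenderLayers::default(), &children, &explicit, &mut inherited);
    }
}

fn propagate_recursive(
    entity: Entity,
    parent_layers: RenderLayers,
    children: &Query<&Children>,
    explicit: &Query<&RenderLayers>,
    inherited: &mut Query<&mut InheritedRenderLayers>,
) {
    // An explicit `RenderLayers` on this entity overrides the inherited value.
    let layers = explicit.get(entity).copied().unwrap_or(parent_layers);
    if let Ok(mut slot) = inherited.get_mut(entity) {
        slot.0 = layers;
    }
    if let Ok(kids) = children.get(entity) {
        for &child in kids.iter() {
            propagate_recursive(child, layers, children, explicit, inherited);
        }
    }
}
```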

Increasing the number of render layers.

Most apps have a limited number of cameras for performance reasons, and as such aren't likely to run out of layer bits. However, certain kinds of apps make extensive use of off-screen render targets; each target may be very simple (e.g. a single quad plus a shader), but there may be a great many of them. Each off-screen target requires a dedicated camera, so you run out of layer bits pretty quickly.

If, on the other hand, we use entity ids as layer ids, then there's no restriction on the number of layers you can have. The only restriction is that each object can only be on a single layer.

Use Cases

Portals into separate realms

A game may have portals that lead to other levels / worlds / realms. Each "realm" is a separate 3D space, with its own scene graph and lighting. These different realms occupy the same (or at least, overlapping) spatial coordinates: that is, realm A and realm B might both have visible content at coordinate (0, 0, 0). The reason for the overlap is that the realms are often authored independently.

An example of this might be a cave entrance: the outside world might be one realm, and the cave interior a second realm.

In this use case, there's a "main" camera, as well as a camera for each portal which is currently visible on screen. Multiple portals can point to different realms, or different locations within the same realm. Any given camera should only be able to "see" the contents of a single realm. If two cameras are targeting the same realm, then lighting and shadows should be consistent between them.

In this case, the number of layer bits needed is equal to the number of realms currently loaded in memory: when a realm asset is loaded, a layer bit can be assigned to it dynamically. This layer bit should apply to all objects within that realm, including lights and loaded mesh assets. A portal into that realm will have a camera whose layer bit matches that of the target realm.

Because rendering portals is expensive, there will be strong downward pressure on the number of portals visible at any given moment. Because of this, the total number of cameras and layers visible at a time will generally be small.

Because realms are large, complex structures, it's cumbersome to assign layer bits to each object in a realm individually. It would be better if the user could simply assign a layer bit at the root object of the realm, and then have the layer bits propagate downward automatically.
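
A sketch of per-realm layer allocation under the current 32-bit scheme; the `RealmLayers` resource and its `allocate` helper are hypothetical:

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

/// Hypothetical bookkeeping: which layer bit each loaded realm occupies.
#[derive(Resource, Default)]
struct RealmLayers {
    next: u8,
}

impl RealmLayers {
    /// Assign the next free layer bit to a newly loaded realm.
    /// (A real implementation would recycle bits when realms unload,
    /// and would have to cope with running past bit 31.)
    fn allocate(&mut self) -> RenderLayers {
        let layer = RenderLayers::layer(self.next);
        self.next += 1;
        layer
    }
}

fn spawn_realm_and_portal(mut commands: Commands, mut realms: ResMut<RealmLayers>) {
    let realm_layer = realms.allocate();
    // The realm's root gets the layer; with downward propagation it would
    // apply to every object in the realm, lights included.
    commands.spawn((SpatialBundle::default(), realm_layer));
    // The portal's camera sees only that realm.
    commands.spawn((Camera3dBundle::default(), realm_layer));
}
```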

Reducing Clutter in an Editor

The concept of "layers" actually derives from digital content creation apps such as Blender, Maya, or AutoCAD. Layers are used as a way to manage complex scenes where there's a lot of visual clutter. The user can temporarily hide some of the clutter by assigning different parts of the scene to different layers and then toggling the visibility of the layers.

In this case, there's generally only a single camera, or possibly several cameras if there's a split-screen view. Generally all cameras will have the same set of layer bits: toggling the visibility of a layer affects all cameras. (The only exception here might be a special camera for previewing, which ignores layer bits).

Lights and shadows: whether lights are toggleable is an app-specific decision, but in all cases the lighting should be consistent. If an object is on a layer which is not visible, its shadow should also not be visible.

The number of layers is strictly limited by the design of the UI - generally it will be some small number like 16 or 32.
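
A sketch of the "toggling a layer affects all cameras" behavior; the `VisibleLayers` resource holding the toggle state is hypothetical:

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

/// Hypothetical app state: the set of layers currently toggled on.
#[derive(Resource)]
struct VisibleLayers(RenderLayers);

// Whenever the toggle state changes, apply it to every camera so that
// all split-screen views stay in sync.
fn sync_camera_layers(
    visible: Res<VisibleLayers>,
    mut cameras: Query<&mut RenderLayers, With<Camera>>,
) {
    if visible.is_changed() {
        for mut layers in &mut cameras {
            *layers = visible.0;
        }
    }
}
```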

Minimap

This is a different kind of clutter reduction: say you have a game which has a complex world, but you also have a top-down minimap. The minimap shows the same scene as the primary camera, but with small details removed to avoid clutter. In this case, there are two layer bits: one representing "large" objects that should show up on both the primary camera and the minimap camera, and one representing "small" objects that should only show up on the primary camera.
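
This case maps directly onto the existing `RenderLayers` API; a sketch (Bevy 0.13):

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

const LARGE_OBJECTS: u8 = 0; // visible to both cameras
const SMALL_DETAILS: u8 = 1; // primary camera only

fn setup_cameras(mut commands: Commands) {
    // The primary camera sees everything.
    commands.spawn((
        Camera3dBundle::default(),
        RenderLayers::from_layers(&[LARGE_OBJECTS, SMALL_DETAILS]),
    ));
    // The minimap camera (rendering e.g. to a corner viewport or an
    // off-screen image) sees only the large objects.
    commands.spawn((
        Camera3dBundle {
            camera: Camera { order: 1, ..default() },
            ..default()
        },
        RenderLayers::layer(LARGE_OBJECTS),
    ));
    // Objects then opt in with `RenderLayers::layer(LARGE_OBJECTS)` or
    // `RenderLayers::layer(SMALL_DETAILS)`.
}
```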

Showing nodes in a node graph

An application for editing shaders or visual effects might want to have a large number of "preview windows", that is, small postage-stamp-sized widgets which show the output of a single camera. An example would be a node-graph editor for shaders, where each node displays the shader output at that stage.

Each preview node has its own dedicated render target and camera. The individual scenes being rendered are generally very simple, maybe only a single quad - but there will be a lot of them. Certainly more than 32. However, each node only draws on a single camera, so if we use entity ids as the layer id, we can have an unlimited number of layers while still conforming to the restriction that each render group contains only a single entity id.
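
A sketch of one preview node under the current API, using the usual render-to-texture pattern; with 32 layer bits this caps out at 32 simultaneous previews, which is exactly the limit that entity-id groups would lift:

```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;
use bevy::render::render_resource::{
    Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
};
use bevy::render::view::RenderLayers;

// Spawns the camera and render target for a single preview node.
// `layer` is the scarce resource: one of only 32 bits per preview.
fn spawn_preview(
    commands: &mut Commands,
    images: &mut Assets<Image>,
    layer: u8,
) -> Handle<Image> {
    let size = Extent3d { width: 128, height: 128, depth_or_array_layers: 1 };
    let mut image = Image {
        texture_descriptor: TextureDescriptor {
            label: None,
            size,
            mip_level_count: 1,
            sample_count: 1,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            usage: TextureUsages::RENDER_ATTACHMENT | TextureUsages::TEXTURE_BINDING,
            view_formats: &[],
        },
        ..default()
    };
    image.resize(size); // allocate pixel data to match the descriptor
    let target = images.add(image);

    // A dedicated camera that renders only this node's layer into the image.
    commands.spawn((
        Camera2dBundle {
            camera: Camera {
                target: RenderTarget::Image(target.clone()),
                ..default()
            },
            ..default()
        },
        RenderLayers::layer(layer),
    ));
    // ... spawn the quad with this node's material, on the same layer ...
    target
}
```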
