Last active January 23, 2024 15:24
Why Frustum Culling Matters, and Why It's Not Important

There is a nice GIF illustrating a technique called "frustum culling" in this Kotaku article:

The interwebs being what they are, this has also led to some controversy.

Some people have interpreted the opening sentence "Every time you move the camera in Horizon Zero Dawn, the game is doing all sorts of under-the-hood calculations, loading and unloading chunks of world to ensure that it all runs properly," as being about the GIF; that's not what frustum culling does, but that's probably not what the article's author meant anyway.

Rather, for the real meaning, we turn to the 2nd paragraph, "the GIF above [...] shows how the game is secretly rendering giant chunks of terrain on the fly as you move your camera around."

Gamedevs may snark a little at this, but if you just replace "rendering" with "not rendering" it's pretty indistinguishable from how a developer might phrase it. The bigger issue for developers is that it's just not that big a deal, and I want to give non-developers a little insight into this.

Two things: one, I'm going to explain stuff you might already know. This isn't condescension; I just want to make sure we're on the same page. Two, experts will realize I'm oversimplifying some things a lot, but that's deliberate: spelling out every caveat needed for full accuracy would bury the explanation in qualifying language, and I want to keep things as simple as possible. (I still put in some explicit caveats to keep reminding you of this.)


Let's start with FPS, frames-per-second, as something gamers generally know about.

The higher the FPS the smoother the animation; higher FPS is better, at least to a point. To a first approximation, computers can do a fixed amount of work in a fixed amount of time. The higher the FPS, the less time is available to create a frame of imagery, which means the less work the computer can do to create one image frame.
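To make the trade-off concrete, the per-frame time budget is just the reciprocal of the target FPS (a back-of-the-envelope calculation, not engine code):

```c
// Per-frame time budget in milliseconds for a target frame rate:
// 30 FPS -> ~33.3 ms, 60 FPS -> ~16.7 ms, 120 FPS -> ~8.3 ms.
// Everything the CPU and GPU do for one frame must fit in this window.
float frame_budget_ms(float target_fps)
{
   return 1000.0f / target_fps;
}
```

Doubling the frame rate halves the budget, which is why "do less work per frame" is the constant refrain.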

Thus, game developers are obsessed with "performance", with making things go faster, by making them do less work (because then the game runs smoother, or because then they can put more things in the game and it looks nicer). There are lots of ways to do less work, and game programming and graphics programming are filled with ways of doing this.

To a first approximation, the relationship between the amount of work needed to draw some things and the number of things you can draw is a simple proportion. The more things you need to draw, the more time it takes; the less you draw, the less time it takes.


The GPU is the chip that really draws the graphics. However, typically the computer's CPU does a lot of setup and decides every frame what things should be drawn, as well as doing physics and AI and lots of other stuff. Both CPU and GPU work can determine the final frame rate, but let's just think about GPU to keep things simple.

To a very bad first approximation, there are two kinds of work that the GPU is doing. The first is related to the 3D geometric description of objects in the scene. The second is work drawing the individual pixels on the screen. The first kind of work we'll simplify down to something called "vertex shading". The second kind of work we'll simplify down to "pixel shading".


When we frustum cull as shown in the HZD gif, we have the CPU skip telling the GPU to draw anything that lies outside of the "view cone" of the camera. (Because the display is a rectangle, that view cone is a four-sided pyramid; a pyramid with its top cut off is called a "frustum" in geometry, and for various arcane technical reasons the view cone used in 3D graphics does have its point cut off, hence the name. And we are skipping drawing, i.e. discarding from consideration, i.e. "culling".) This may also save the CPU some work (or cost it some!), but let's stick to the GPU.
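The CPU-side test can be very simple. Here is a minimal sketch (not any particular engine's code) using the common approach of testing each object's bounding sphere against the six planes of the frustum, with plane normals pointing inward:

```c
#include <stdbool.h>

// A frustum as six planes, each stored as (normal, d) with the normal
// pointing inward, so a point p is inside a plane's half-space when
// dot(n, p) + d >= 0.
typedef struct { float nx, ny, nz, d; } Plane;
typedef struct { Plane planes[6]; } Frustum;
typedef struct { float x, y, z, radius; } BoundingSphere;

// Conservative sphere-vs-frustum test: if the sphere's center is more
// than one radius behind any plane, the whole object lies outside the
// view cone and the CPU can skip issuing the draw call entirely.
bool is_culled(const Frustum *f, const BoundingSphere *s)
{
   for (int i = 0; i < 6; ++i) {
      const Plane *p = &f->planes[i];
      float dist = p->nx*s->x + p->ny*s->y + p->nz*s->z + p->d;
      if (dist < -s->radius)
         return true;   // entirely outside this plane: cull it
   }
   return false;  // possibly visible; let the GPU sort out the rest
}
```

Note the test is conservative: an object straddling a plane edge still gets submitted, and the GPU clips it; that's fine, because culling only needs to be cheap and never wrong, not exact.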

Because the only things that are skipped are things outside the view cone, they were never visible anyway. If we asked the GPU to draw them, it would do a bunch of (different, more expensive) work determining they weren't visible, so skipping them in the CPU doesn't change the final image at all. Because no pixels would have been affected, and because of the way the system works, no pixel shader work would ever have been done for those objects at all, only vertex shader.

So, frustum culling saves vertex shader work but not pixel shader work (because no pixel shader work would have been done).

To a very rough approximation, an outdoors scene with a 90-degree view cone sees 1/4 of the full 360 degrees, so if we compare what would happen if we draw the full scene to drawing only within the view cone, we skip approximately 3/4 of the vertex shading work.
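As arithmetic, that estimate is just (a deliberate oversimplification, as in the text above: it ignores the vertical field of view and pretends the scene is spread evenly around you):

```c
// Rough fraction of vertex-shading work skipped by frustum culling,
// treating scene geometry as spread evenly around the full 360 degrees.
// fraction_skipped(90) is 0.75, i.e. roughly 3/4 of the work skipped.
float fraction_skipped(float fov_degrees)
{
   return 1.0f - fov_degrees / 360.0f;
}
```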

GPUs use the same hardware to compute vertex shaders as to compute pixel shaders, so, for a fixed frame rate, aka a fixed time budget per frame, aka a fixed work budget per frame, the time saved on vertex shading can now be spent on more pixel shading, either by increasing the complexity of the pixel shading (e.g. image quality), or by increasing the number or geometric complexity of objects in the scene (spending that savings on both vertex and pixel shading).

Although things are always changing, and even in a single generation there may be a big difference between AAA games, the "average" non-AAA game, and the average indie game, one can imagine there might once have been a time when, in many games, pixel shading costs dominated over vertex shading costs; in that situation frustum culling might not make that big a difference to frame rate or image quality in practice, since reducing vertex shading isn't saving "that much" work. (Indeed, historically, there was once a time when GPUs used different units for vertex shading than for pixel shading, so saving vertex shading work didn't actually provide any extra resources for pixel shading, and thus no way to leverage the improvement, although even that conclusion is an oversimplification.) Part of the experience of being a graphics developer is that your old knowledge ("frustum culling is always worth it") can become invalid as technology changes... and then later become valid again.


Nearly every game out there uses frustum culling, because it is simple and cheap and (usually?) effective. The Toy Story movies used frustum culling. Every pre-rendered game cutscene from the 90s used frustum culling. It is probably the single most ubiquitous "optimization" found in graphics.


One thing frustum culling is not doing is "loading" or "unloading" the scene (and while the Kotaku article doesn't claim this, I have seen other semi-technical articles that used this kind of terminology for culling).

Generally, developers use loading/unloading to refer to things being in memory ("RAM"). It may refer to loading from disk into the CPU's memory, or it may refer to loading from the CPU's memory to the GPU's memory (sometimes called "uploading").

"Streaming" is often used to describe the process of loading/unloading chunks of terrain as a camera moves in an open world. This is not directly related to performance as defined above; typically there is simply a finite amount of RAM; since the CPU/GPU can only render things that are in RAM, it is necessary to load/unload content from that memory. However, in most games you can turn randomly and unpredictably, and it doesn't make sense to load/unload content as you turn--you might need it again almost immediately, so better to just keep it in RAM and not draw it.

(An exception: the id software game Rage could load/unload textures as you turned the camera. Another: it seemed like some versions of Minecraft might have done some loading/unloading depending on camera angle, though game developers do not generally consider Minecraft a paragon of graphics optimization or graphics quality.)

It can look like objects are "unloaded", because the computer isn't trying to draw the things it knows can't be seen; but in the case of frustum culling, they're still "there", they're just "hidden" in some sense; whereas if unloaded, there is a sense in which the computer wouldn't even know what was there at all. For example, if something is unloaded, the game typically can't compute physics in that area; so everything "far away" might stop moving because they're unloaded (in which case you can't see them, so you can't see that they stop moving), but things close but behind the camera do not stop moving (so they can still drive past you, or get in your way if you're backing up, or explode). As always, there are exceptions.


As stated, graphics "optimization" is important because it frees up GPU time to render more, better stuff. Thus, the practice of graphics (as well as the "academic" graphics literature) is replete with ways of saving work on the GPU, such as:

  • frustum culling (not trying to draw things that can't be seen because of the view cone)
  • occlusion culling (not trying to draw things that can't be seen because they're behind other things)
  • level of detail (skipping small geometric details when things are far enough away those details can't be made out anyway)
  • simpler mathematical approximations to the equations governing how things reflect light
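Level of detail, for instance, can be as simple as picking a coarser mesh by distance (thresholds here are made up for illustration):

```c
// Illustrative level-of-detail selection: choose a lower-detail mesh
// as the object gets farther from the camera, so distant objects cost
// less vertex shading for roughly the same on-screen result.
int choose_lod(float distance_to_camera)
{
   if (distance_to_camera < 20.0f)  return 0;  // full-detail mesh
   if (distance_to_camera < 100.0f) return 1;  // medium mesh
   return 2;                                   // lowest-detail mesh
}
```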

There are also hosts of ways of saving work on the CPU, but I've bored you enough already.


Almost every game uses some or all of these "clever tricks", and this is the source of some of the developer backlash over the Kotaku piece: there is no indication that HZD is doing anything more clever than anybody else, and so some gamedevs may perceive it as needless PR puffery to single it out on that front. On the other hand, the Kotaku article is actually calling attention to (giving PR to?) a documentary, not to HZD or to HZD devs, so.

(Of course maybe HZD does have some unique clever tricks up its sleeve! If so, we may find out some day, because the gamedev community is (perhaps surprisingly) open about sharing clever tricks with each other.)

ghost commented Apr 20, 2017

Are the "various arcane technical reasons the view cone used in 3D graphics [has] its point cut off" the fact that it terminates at a computer screen?

No, it's about a thing called a "near clipping plane", which AFAICT primarily exists because GPU designers want to reduce how much computation is needed to update a thing called a "z-buffer" or "depth-buffer", plus some related improvements it gives the hardware (but with several downsides that most gamedevs seem to have decided they can live with).

Ken Levine redirected me here on Twitter, as I am in the layman demographic who never understood how this worked. I am always fascinated with how rendering works, however. Thanks for writing this; it explains and demystifies so much of the "magic" behind how the game coordinates rendering with the player camera's FOV, though in a good way.

Your optimization section in particular made me recall using the free camera in Counter Strike, I would move through the level geometry and notice that anything beyond what a player could see in a given map wasn't actually rendered. You would go behind background shrubs and it would be a couple of triangular masses. This puts into perspective how much more your hardware could focus on rendering what was happening in the arena and not on extraneous details. It would also explain how Counter Strike could run on potato machines for so long.

Now open world games and such just eat your hardware, asking ridiculous amounts of resources to maintain a LOD over absurd distances compared to games of yore.

rahwang commented Apr 21, 2017

Thanks for the post! Well-structured explanation. :)

This is more in depth and technical (for fellow graphics devs), but my coworker @austinEng wrote a nice article on frustum culling in the open source renderer we work on. It has some nice visuals.

@MrNonSequitur Yes, exactly. We don't render the volume between your eye and the screen. That volume is the top of the pyramid that gets cut off. @nothings The "near clipping plane" is the plane of the screen. That's what does the cutting, as it were.

nothings commented May 18, 2017

@frankleonrose It's not why we do the near clipping plane culling. You can set it to an arbitrary distance, and nobody sets it to the distance between the eye and the screen, which would be anywhere from 1-10 feet, and would potentially hide lots of important stuff (like the wall 3 inches in front of your eyes if you walk up to a wall). There is no good reason (outside the W-buffer stuff I was avoiding the technical details about) to not render stuff between the apex of the pyramid and the near plane; it's stuff you ought to see, and it occasionally causes visible glitches in games where you 'see through the world' in an entirely inappropriate way.

Basically, if the near plane ever actually does clip anything, you'll have a visible glitch anyway. (There are exceptions like polygons you can intentionally move through, like water surfaces.)

Pharap commented Jun 22, 2017

I know I'm late to the party, but I'm glad someone went to the trouble of explaining why us gamedevs were pointing and laughing.
It's like a chef watching someone be amazed at how putting cake mix into an oven 'magically' turns it into a cake.
View frustum culling is something so fundamental that pretty much all game devs are aware of it and what it does even if they don't know the maths behind it or have never had to write code to do it themselves.
Out of interest, the video itself was actually talking about making the engine fast enough to load bits of scenery on the fly and only loading the areas the player can see, not specifically about view-frustum culling. Naturally the article printed a naive, slightly skewed version of what the video was discussing in an attempt to attract interest.

On a side note:

simpler mathematical approximations to the equations governing how things reflect light

Quake III knows all about approximations.
