There is a nice GIF illustrating a technique called "frustum culling" in this Kotaku article: http://kotaku.com/horizon-zero-dawn-uses-all-sorts-of-clever-tricks-to-lo-1794385026
The interwebs being what they are, this has also led to some controversy.
Some people have interpreted the opening sentence "Every time you move the camera in Horizon Zero Dawn, the game is doing all sorts of under-the-hood calculations, loading and unloading chunks of world to ensure that it all runs properly," as being about the GIF; that's not what frustum culling does, but that's probably not what the article's author meant anyway.
Rather, for the real meaning, we turn to the 2nd paragraph, "the GIF above [...] shows how the game is secretly rendering giant chunks of terrain on the fly as you move your camera around."
Gamedevs may snark a little at this, but if you just replace "rendering" with "not rendering" it's pretty indistinguishable from how a developer might phrase it. The bigger issue for developers is that it's just not that big a deal, and I want to give non-developers a little insight into this.
Two things: one, I'm going to explain stuff you might already know. This isn't condescension; I just want to make sure we're on the same page. Two, experts will realize I'm oversimplifying some things a lot, but that's to avoid burying the explanation under all the caveats that would be needed to be fully accurate, while still keeping it as simple as possible. (I still put in some explicit caveats to keep reminding you of this.)
Let's start with FPS, frames-per-second, as something gamers generally know about.
The higher the FPS the smoother the animation; higher FPS is better, at least to a point. To a first approximation, computers can do a fixed amount of work in a fixed amount of time. The higher the FPS, the less time is available to create a frame of imagery, which means the less work the computer can do to create one image frame.
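To make the time budget concrete, here's a trivial sketch of the arithmetic (the function name is just mine, not anything standard):

```python
# At a given FPS, each frame must be produced within 1000/FPS milliseconds.
def frame_budget_ms(fps):
    """Milliseconds available to produce one frame at the given FPS."""
    return 1000.0 / fps

print(frame_budget_ms(30))  # about 33.3 ms of work per frame
print(frame_budget_ms(60))  # about 16.7 ms: double the FPS, half the budget
```

So going from 30 to 60 FPS cuts the per-frame work budget in half, which is exactly why developers care so much about doing less work per frame.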
Thus, game developers are obsessed with "performance", with making things go faster, by making them do less work (because then the game runs smoother, or because then they can put more things in the game and it looks nicer). There are lots of ways to do less work, and game programming and graphics programming are filled with ways of doing this.
To a first approximation, the relationship between the amount of work needed to draw some things and the number of things you draw is a simple proportion. The more things you need to draw, the more time it takes; the fewer things you draw, the less time it takes.
The GPU is the chip that really draws the graphics. However, typically the computer's CPU does a lot of setup and decides every frame what things should be drawn, as well as doing physics and AI and lots of other stuff. Both CPU and GPU work can determine the final frame rate, but let's just think about GPU to keep things simple.
To a very bad first approximation, there are two kinds of work that the GPU is doing. The first is related to the 3D geometric description of objects in the scene. The second is work drawing the individual pixels on the screen. The first kind of work we'll simplify down to something called "vertex shading". The second kind of work we'll simplify down to "pixel shading".
FRUSTUM CULLING PERFORMANCE
When we frustum cull as shown in the HZD gif, we have the CPU skip telling the GPU to draw anything that lies outside of the "view cone" of the camera. (Because the display is a rectangle, that view cone is a four-sided pyramid; a pyramid with its top cut off is called a "frustum" in geometry, and for various arcane technical reasons the view cone used in 3D graphics does have its point cut off, hence the name. And we are skipping drawing, i.e. discarding from consideration, i.e. "culling".) This may also save the CPU some work (or cost it some!), but let's stick to the GPU.
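The CPU-side test itself is simple: check each object's bounding volume against the planes of the frustum, and skip anything entirely outside any plane. Here's a toy sketch (all names, numbers, and the two-plane frustum are made up for illustration; a real implementation tests all six planes and uses vector math libraries):

```python
import math

# Each frustum plane is (normal, d), with the normal pointing inward;
# a point p is on the visible side of a plane when dot(normal, p) + d >= 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sphere_in_frustum(center, radius, planes):
    """True if the bounding sphere may be visible; False means cull it."""
    for normal, d in planes:
        if dot(normal, center) + d < -radius:
            return False  # entirely outside this plane: skip drawing
    return True

# Left and right planes of a camera at the origin looking down +z with a
# 90-degree horizontal view cone (top/bottom/near/far omitted for brevity).
s = math.sqrt(0.5)
planes = [((s, 0.0, s), 0.0),    # left plane, inward-facing normal
          ((-s, 0.0, s), 0.0)]   # right plane, inward-facing normal

scene = [
    {"name": "tree", "center": (0.0, 0.0, 10.0), "radius": 1.0},   # in front
    {"name": "rock", "center": (0.0, 0.0, -10.0), "radius": 1.0},  # behind
]
draw_list = [o["name"] for o in scene
             if sphere_in_frustum(o["center"], o["radius"], planes)]
print(draw_list)  # only the tree in front of the camera gets drawn
```

Note that the test is conservative: an object straddling a plane edge is kept and drawn, which is always safe, just slightly wasteful.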
Because the only things that are skipped are things outside the view cone, they were never visible anyway. If we asked the GPU to draw them, it would do a bunch of (different, more expensive) work determining they weren't visible, so skipping them in the CPU doesn't change the final image at all. Because no pixels would have been affected, and because of the way the system works, no pixel shader work would ever have been done for those objects at all, only vertex shader.
So, frustum culling saves vertex shader work but not pixel shader work (because no pixel shader work would have been done).
To a very rough approximation, an outdoors scene with a 90-degree view cone sees 1/4 of the full 360 degrees, so if we compare what would happen if we draw the full scene to drawing only within the view cone, we skip approximately 3/4 of the vertex shading work.
GPUs use the same hardware to compute vertex shaders as to compute pixel shaders, so, for a fixed frame rate, aka a fixed time budget per frame, aka a fixed work budget per frame, the time saved on vertex shading can now be spent on more pixel shading, either by increasing the complexity of the pixel shading (e.g. image quality), or by increasing the number or geometric complexity of objects in the scene (spending that savings on both vertex and pixel shading).
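Here's some back-of-envelope budget math with entirely made-up numbers, just to show the shape of the trade-off:

```python
# Hypothetical 60 FPS frame: suppose it originally spends 4 ms on vertex
# shading and 12 ms on pixel shading. Culling a 90-degree view cone from
# a 360-degree scene skips roughly 3/4 of the vertex shading work.
vertex_ms, pixel_ms = 4.0, 12.0
saved = vertex_ms * 3 / 4          # vertex shading skipped by culling
pixel_budget = pixel_ms + saved    # freed-up time can go to pixel shading
print(saved, pixel_budget)         # 3.0 ms saved, 15.0 ms for pixels
```

In this toy scenario, culling frees up 3 ms per frame, a 25% increase in the pixel shading budget, at essentially no cost.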
Although things are always changing, and even within a single generation there may be a big difference between AAA games, the "average" non-AAA game, and the average indie game, one can imagine there might once have been a time when, in many games, pixel shading costs dominated vertex shading costs; in that situation frustum culling might not make that big a difference to frame rate in practice, since reducing vertex shading isn't saving "that much" work. (Indeed, historically, there was once a time when GPUs used different hardware units for vertex shading than for pixel shading, so saving vertex shading work didn't actually provide any extra resources for pixel shading, and thus there was no way to leverage the improvement, although even that conclusion is an oversimplification.) Part of the experience of being a graphics developer is that your old knowledge ("frustum culling is always worth it") can become invalid as technology changes... and then later become valid again.
FRUSTUM CULLING IN PRACTICE
Nearly every game out there uses frustum culling, because it is simple and cheap and (usually?) effective. The Toy Story movies used frustum culling. Every pre-rendered game cutscene from the 90s used frustum culling. It is probably the single most ubiquitous "optimization" found in graphics.
One thing frustum culling is not doing is "loading" or "unloading" the scene (and while the Kotaku article doesn't claim this, I have seen other semi-technical articles that used this kind of terminology for culling).
Generally, developers use loading/unloading to refer to things being in memory ("RAM"). It may refer to loading from disk into the CPU's memory, or it may refer to loading from the CPU's memory to the GPU's memory (sometimes called "uploading").
"Streaming" is often used to describe the process of loading/unloading chunks of terrain as a camera moves in an open world. This is not directly related to performance as defined above; typically there is simply a finite amount of RAM; since the CPU/GPU can only render things that are in RAM, it is necessary to load/unload content from that memory. However, in most games you can turn randomly and unpredictably, and it doesn't make sense to load/unload content as you turn--you might need it again almost immediately, so better to just keep it in RAM and not draw it.
(An exception: the id software game Rage could load/unload textures as you turned the camera. Another: it seemed like some versions of Minecraft might have done some loading/unloading depending on camera angle, though game developers do not generally consider Minecraft a paragon of graphics optimization or graphics quality.)
It can look like objects are "unloaded", because the computer isn't trying to draw the things it knows can't be seen; but in the case of frustum culling, they're still "there", they're just "hidden" in some sense; whereas if unloaded, there is a sense in which the computer wouldn't even know what was there at all. For example, if something is unloaded, the game typically can't compute physics in that area; so everything "far away" might stop moving because they're unloaded (in which case you can't see them, so you can't see that they stop moving), but things close but behind the camera do not stop moving (so they can still drive past you, or get in your way if you're backing up, or explode). As always, there are exceptions.
GRAPHICS OPTIMIZATION IN GENERAL
As stated, graphics "optimization" is important because it frees up GPU time to render more, better stuff. Thus, the practice of graphics (as well as the "academic" graphics literature) is replete with ways of saving work on the GPU, such as:
- frustum culling (not trying to draw things that can't be seen because of the view cone)
- occlusion culling (not trying to draw things that can't be seen because they're behind other things)
- level of detail (skipping small geometric details when things are far enough away that those details can't be made out anyway)
- simpler mathematical approximations to the equations governing how things reflect light
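To pick one of these, level of detail usually boils down to choosing a cheaper version of a model as it gets farther away. A tiny sketch (the thresholds and the three-level scheme are invented for illustration):

```python
# Pick a mesh detail level by distance: 0 = full detail, 1 = medium, 2 = low.
def pick_lod(distance, thresholds=(20.0, 60.0)):
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # past the last threshold: lowest detail

print(pick_lod(5.0), pick_lod(40.0), pick_lod(100.0))  # 0 1 2
```

Real engines smooth the transitions (blending, "dithered" fades) so you don't see models visibly pop between detail levels, but the core decision is this simple.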
There are also hosts of ways of saving work on the CPU, but I've bored you enough already.
Almost every game uses some or all of these "clever tricks", and this is the source of some of the developer backlash over the Kotaku piece: there is no indication that HZD is doing anything more clever than anybody else, and so some gamedevs may perceive it as needless PR puffery to single it out on that front. On the other hand, the Kotaku article is actually calling attention to (giving PR to?) a documentary, not to HZD or to HZD devs, so.
(Of course maybe HZD does have some unique clever tricks up its sleeve! If so, we may find out some day, because the gamedev community is (perhaps surprisingly) open about sharing clever tricks with each other.)