What Is Frame Generation, and Should You Use It In Your Games?

by oqtey

Earlier this year, Nvidia announced its new line of 50 Series GPUs with a hot new feature in tow: “Multi Frame Generation.” Building on earlier frame gen tech, these new GPUs allow games to generate multiple new video frames for every frame rendered the normal way. But is that a good thing? Or are these just “fake frames”? Well, it’s complicated.

On a very basic level, “frame generation” refers to the technique of using deep learning AI models to generate frames in between two frames of a game rendered by the GPU. Your graphics card does the more grindy work of creating “Frame One” and “Frame Three” based on 3D models, lighting, textures, etc., but then frame generation tools take those two images and make a guess at what “Frame Two” should look like.

Multi Frame Generation takes this a step further. Instead of just generating one extra frame, it generates several. This means that, on the highest settings, three out of every four frames you see could be generated, rather than rendered directly. Whether that’s a good thing, though, depends heavily on what type of game you play and what you want your gaming experience to be.
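If it helps to see the core idea in code, here’s a toy sketch of interpolation in Python. To be clear, this is not how Nvidia’s model actually works (DLSS uses a trained neural network along with data like motion vectors); it just shows the basic shape of the problem: given two rendered frames, produce one or more in-between frames.

```python
# Toy illustration of frame interpolation, NOT Nvidia's actual model.
# Real frame generation uses a trained neural network plus motion vectors;
# this sketch just linearly blends the two rendered frames.
import numpy as np

def generate_frames(frame_one: np.ndarray, frame_three: np.ndarray,
                    count: int = 1) -> list[np.ndarray]:
    """Return `count` in-between frames. count=1 mimics classic frame
    generation; count=3 mimics Multi Frame Generation, where 3 of
    every 4 frames shown are generated."""
    generated = []
    for i in range(1, count + 1):
        t = i / (count + 1)  # fractional position between the two frames
        blend = (1 - t) * frame_one.astype(float) + t * frame_three.astype(float)
        generated.append(blend.astype(frame_one.dtype))
    return generated

# Two dummy 1080p RGB frames standing in for real rendered output.
f1 = np.zeros((1080, 1920, 3), dtype=np.uint8)
f3 = np.full((1080, 1920, 3), 255, dtype=np.uint8)
middles = generate_frames(f1, f3, count=3)
print(len(middles))  # 3 generated frames between the 2 rendered ones
```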

What’s the difference between upscaling and frame generation?

Nvidia’s new Multi Frame Generation comes as part of its announcement of DLSS 4. DLSS stands for Deep Learning Super Sampling and, as the name implies, its earlier iterations weren’t about frame generation, but rather supersampling (or upscaling). 

In this version of the tech, a GPU would render a lower-resolution version of a frame—say, 1080p—and then upscale it to a higher resolution like 1440p or 2160p (4K). The “deep learning” in DLSS refers to training a machine learning model on each game individually to give the upscaler a better idea of what the higher res frame should look like.
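For a sense of what upscaling means at its most basic, here’s a naive nearest-neighbor upscale in Python. This is emphatically not DLSS; where this just repeats pixels, a learned super-resolution model predicts what the extra detail should look like. Think of it as the baseline DLSS is trying to beat.

```python
# Toy nearest-neighbor upscale from 1080p to 4K -- the crude baseline
# that a learned upscaler like DLSS improves on. A real super-resolution
# model predicts new detail instead of duplicating existing pixels.
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    # Repeat each pixel `factor` times along both height and width.
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

low_res = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
high_res = upscale_nearest(low_res, factor=2)
print(high_res.shape)  # (2160, 3840, 3) -- a 4K frame from 1080p input
```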

Nowadays, DLSS refers more to a whole suite of tools Nvidia uses to eke out better performance, and the above method is usually referred to as Super Resolution. Frame generation, on the other hand, takes two fully rendered frames and generates an entirely new frame between them from scratch.

Of course, it’s also possible to use all of this tech simultaneously. You can end up in situations where your GPU is technically only rendering one lower-resolution frame for every two (or, with Multi Frame Generation, four) full-res frames you see. If that sounds like a lot of extrapolation, well, it is. And, incredibly, it works pretty well. Most of the time.

When is frame generation useful?

In a relatively short amount of time, we’ve seen the demands placed on GPUs explode. A 4K frame contains four times as many pixels as a 1080p one. Moreover, while media like movies and TV have stuck to a relatively consistent 24-30 frames per second, gamers increasingly demand at least 60fps as a baseline, often pushing that even higher to 120fps or 240fps on high-end machines. And do not get me started on Samsung’s absurd display capable of supporting up to 500fps.

If your GPU had to calculate every pixel of a 4K image 120 (or 500) times every second, the resulting fire coming from your PC would be visible from space—at least for games with the kind of detailed, ray-traced graphics we’re used to from AAA titles. 
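To put rough numbers on that, here’s the back-of-the-envelope arithmetic. None of this accounts for how expensive each individual pixel is to shade (ray tracing makes them much pricier); it just counts how many pixel updates per second brute-force rendering would demand.

```python
# Back-of-the-envelope pixel throughput: how many pixels per second a GPU
# would have to shade at each resolution and frame rate with no upscaling
# or frame generation. Simple arithmetic, not benchmarks.
RESOLUTIONS = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}

for name, pixels in RESOLUTIONS.items():
    for fps in (60, 120, 500):
        print(f"{name} @ {fps}fps: {pixels * fps / 1e9:.2f} billion pixels/sec")

# 4K is 8,294,400 pixels -- exactly 4x the 2,073,600 pixels of 1080p --
# so 4K @ 120fps is nearly a billion pixel updates every second.
```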

From that perspective, frame generation isn’t just helpful, it’s necessary. On Nvidia’s latest GPUs, Multi Frame Generation can allow a game to increase its frame rate by several hundred frames per second, even in 4K, while still looking pretty great. That’s just not a frame rate that’s possible at that resolution without an industrial rig.

When it works (and we’ll come back to that), frame generation can allow for smoother movement and less eye strain. If you want to get a taste of the difference, this little tool lets you experiment with different frame rates (as long as your display supports it). Try comparing 30fps to 60fps or 120fps and follow each ball with your eyes. The effect gets even more stark if you turn off motion blur, which many games enable by default.

For chaotic games with a lot of movement, those extra frames can be a huge benefit, even if they’re not exactly perfect. If you were to examine the images frame by frame, you might see some artifacts, but they tend to be less noticeable while playing. At least, that’s how it should work in theory.

What are the downsides of frame generation?

In practice, how well this tech works can vary greatly on a per-game basis, as well as by how powerful your machine is. For example, going from 30fps to 60fps with frame generation can look jankier than going from 60fps to 120fps. This is due, at least in part, to the fact that at lower frame rates, there’s more time between reference frames, which means more guesswork for the frames being generated. That leads to more noise and artifacts.
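The arithmetic behind that is simple enough to show: the gap the generator has to bridge is just the time between rendered frames.

```python
# Why frame generation looks worse from a low base frame rate: the gap
# between the two rendered reference frames is longer, so the generator
# has to guess across more motion. Pure arithmetic, no benchmarks.
def reference_gap_ms(base_fps: float) -> float:
    return 1000.0 / base_fps  # time between rendered frames, in ms

for base, target in ((30, 60), (60, 120)):
    print(f"{base}fps -> {target}fps: the generator bridges a "
          f"{reference_gap_ms(base):.1f} ms gap between rendered frames")
# 30fps -> 60fps: 33.3 ms of motion to guess across
# 60fps -> 120fps: only 16.7 ms -- half the guesswork
```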

Whether those artifacts will bother you is also highly subjective. For example, if you’re swinging through the city in Spider-Man 2 and the trees in the background look stranger than they should, would you even notice? On the other hand, in slower-paced atmospheric games like Alan Wake II, where graphical detail and set design are more important for the vibes, ghosting and smearing can seem more pronounced.


It should also be noted that artifacts aren’t an unavoidable part of frame generation. For starters, better input frames lead to better generated frames. Nvidia, for example, is touting new models behind Super Resolution and Ray Reconstruction (a whole other piece of tech for improving ray tracing results that we simply don’t have time to get into here) to improve the images that get passed to the frame generation portion of the pipeline.

You can think of it a bit like a giant, complex game of telephone. The only way to get the most accurate, detailed frames from your game is to render them directly. The more steps you add to extrapolate extra pixels and frames, the more chances there are for mistakes. However, our tools are getting progressively better at cutting down on those mistakes. So it’s up to you to decide whether you value more frames or more detail.

Why frame generation is (probably) bad for competitive games

There’s one major exception to this whole argument, and that’s competitive games. If you play online games like Overwatch 2, Marvel Rivals, or Fortnite, then smooth motion isn’t necessarily your primary concern. You might be more concerned with latency: the delay between when you react to something and when the game registers your reaction.

Frame generation complicates latency because it requires creating frames out of order. Recall our earlier example: the GPU renders Frame One, then Frame Three, and only then can the frame generator come up with what Frame Two should look like. In that scenario, the game can’t actually show you Frame Two until it has figured out Frame Three.
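Here’s a toy model of that holdback, assuming a steady 60fps render rate, evenly paced 120fps output, and instant generation. In practice, generating the frame takes time too, so the real added latency is larger; the point is only that interpolation-style frame generation has to delay what you see.

```python
# Toy model of the latency frame generation adds. Assumptions: steady
# 60fps rendering, evenly paced 120fps output, instant generation.
# All numbers are illustrative, not measurements.
RENDER_MS = 1000.0 / 60    # a new rendered frame every ~16.7 ms
DISPLAY_MS = 1000.0 / 120  # output frames shown every ~8.3 ms

frame_one_rendered = 1 * RENDER_MS    # Frame One finishes rendering
frame_three_rendered = 2 * RENDER_MS  # Frame Three finishes rendering
frame_two_ready = frame_three_rendered  # can't exist before Frame Three

# Frame Two is shown one display interval after Frame One, so Frame One
# must be held back long enough for Frame Two to be ready on time:
frame_one_shown = frame_two_ready - DISPLAY_MS
delay = frame_one_shown - frame_one_rendered
print(f"Frame One is held back ~{delay:.1f} ms")  # ~8.3 ms at these rates
```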

Now, in most cases this isn’t a problem. At 120fps, each frame is only on screen for about 8.33 milliseconds. Your brain can’t even register that short a delay, so it’s not likely to cause a huge issue. In fact, human reaction time is typically measured in the hundreds of milliseconds. For a completely unscientific proof, go ahead and try out this reaction time test. Let me know when you get under 10 milliseconds. I’ll wait.
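The math there is easy to check. The 200 ms figure below is just a commonly cited ballpark for human reaction time, not a measured value.

```python
# Frame time vs. human reaction time.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

HUMAN_REACTION_MS = 200  # commonly cited ballpark, not a measurement

for fps in (60, 120, 240):
    ft = frame_time_ms(fps)
    print(f"{fps}fps: {ft:.2f} ms per frame, "
          f"~{HUMAN_REACTION_MS / ft:.0f}x shorter than a typical reaction")
# At 120fps, each frame lasts ~8.33 ms -- roughly 24x faster than you
# can consciously react.
```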

However, this does become an issue in competitive gaming, because frame delays aren’t the only latency issues you’re dealing with. There’s latency between your keyboard and your computer, between your computer and the server, and between the server and the other players. 

Each of those individual links in the chain might add only a little delay, but they all have to get synced up somewhere. That “somewhere” is the game’s tick rate: how often the game you’re playing updates on the server. For example, Overwatch 2 has a tick rate of 64. That means that every second, the server updates what has happened in the game 64 times, or once every 15.63 milliseconds.
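Here’s that arithmetic, plus the comparison that matters for the next paragraph. The 60fps base render rate is just an example figure, not something from the article.

```python
# Server tick interval vs. rendered-frame interval.
TICK_RATE = 64  # Overwatch 2's tick rate, per the article
tick_interval_ms = 1000.0 / TICK_RATE
print(f"server tick interval: {tick_interval_ms:.2f} ms")  # ~15.63 ms

# Compare that to how often *rendered* (not generated) frames arrive at
# an example 60fps base render rate -- the cadence at which your view of
# the world actually gets fresh information:
rendered_interval_ms = 1000.0 / 60
print(f"rendered-frame interval at 60fps: {rendered_interval_ms:.2f} ms")
# The server can update slightly faster than your rendered frames do,
# which is the window described below.
```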

That’s just enough that if your game is showing you our rhetorical Frame One, where the enemy Cassidy is in your crosshairs, but hasn’t yet updated to Frame Three, where he isn’t, the server could tick over before your screen catches up. That could mean your shot registers as a miss even though it feels like it should have hit. This is also the one issue that can actually get worse with Multi Frame Generation, since even more of what you see on screen is generated from slightly stale information.

There are ways to mitigate this hit (Nvidia’s Reflex tech, for example, reduces input latency elsewhere in the chain), but it’s not something that can be avoided entirely. If you’re playing competitive online games, for now you’re better off turning your graphics settings down to get a higher real frame rate, rather than relying on frame generation.
