After the Blackwell RTX 50-series announcement last night, there's been some confusion. In a live Q&A session today with Nvidia CEO Jensen Huang, we were able to ask for clarification on DLSS 4 and some of the other neural rendering techniques that Nvidia demoed. With AI being a key element of the Blackwell architecture, it's important to better understand how it's being used and what that means for various use cases.
One of the big performance "multipliers" with DLSS 4 is multi frame generation. With DLSS 3, Nvidia would render two frames and then use AI to interpolate an intermediate frame. This adds some latency to the game rendering pipeline and also causes some frame pacing issues. On the surface, it sounded as though DLSS 4 would do something similar, but instead of generating one frame it would generate two or three frames — multiple frames of interpolation, in other words. It turns out that's incorrect.
When we asked how DLSS 4 multi frame generation works and whether it was still interpolating, Jensen boldly proclaimed that DLSS 4 "predicts the future" rather than "interpolating the past." That drastically changes how it works, what it requires in terms of hardware capabilities, and what we can expect in terms of latency.
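The latency implication of that distinction is worth spelling out. Here's a rough sketch (not Nvidia's implementation; the function and numbers are ours for illustration) of why interpolation adds delay while extrapolation, in principle, need not:

```python
# Illustrative sketch, not Nvidia's pipeline: the latency cost of
# interpolating between two rendered frames vs. extrapolating
# ("predicting") new frames from past data only.

def added_latency_ms(render_fps: float, mode: str) -> float:
    """Extra display latency added by frame generation, in milliseconds.

    Interpolation must hold back the newest rendered frame until the
    in-between frame has been shown, adding roughly one rendered-frame
    time. Extrapolation predicts ahead from frames that already exist,
    so it avoids that hold-back delay (though the generated frames
    still don't reflect new user input).
    """
    frame_time_ms = 1000.0 / render_fps
    if mode == "interpolate":
        return frame_time_ms  # must wait for the next real frame first
    if mode == "extrapolate":
        return 0.0  # generated frames depend only on past frames
    raise ValueError(f"unknown mode: {mode}")

# At 40 FPS rendered, interpolation holds frames back ~25 ms longer.
print(added_latency_ms(40, "interpolate"))  # 25.0
print(added_latency_ms(40, "extrapolate"))  # 0.0
```

This is a simplification (it ignores the time spent generating the frames themselves), but it captures why "predicting the future" changes the latency picture.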
Generated frames are still created without new user input, though the Reflex 2 warping functionality may at least partially mitigate that. Based on previously rendered frames, motion vectors, and other data, DLSS 4 will generate new frames to create a smoother experience. It also has new hardware requirements that help it maintain better frame pacing.
We haven't been able to try it out in person yet, so we can't say for certain how DLSS 4 multi frame generation compares to DLSS 3 framegen and normal rendering. It sounds as though there's still a latency penalty, but how much that will be felt — and particularly how it feels on different tiers of RTX 50-series GPUs — is an important consideration.
We know from DLSS 3 that if you're only getting a generated framerate of 40 FPS, as an example, the game can feel very sluggish and laggy even if it looks reasonably smooth. That's because user input only gets sampled at 20 FPS. With DLSS 4, that potentially means you could have a generated framerate of 80 FPS while user input is still only sampled at 20 FPS. Put another way, we've generally felt that you need input sampling rates of at least 40–50 FPS for a game to feel responsive while using framegen.
With multi frame generation, that would mean we could potentially need generated framerates of 160–200 FPS for a similar experience. That might be great on a 240 Hz monitor and we'd love to see it, but by the same token multi frame generation on a 60 Hz or even 120 Hz monitor might not be that awesome.
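The arithmetic behind those numbers is simple enough to write down. A minimal sketch (the function name is ours, and "generated frames per rendered frame" is our reading of the 2x/4x multipliers):

```python
# Back-of-the-envelope math from the discussion above: with frame
# generation, user input is only sampled on rendered frames, so
# responsiveness tracks the rendered rate, not the displayed rate.

def input_sample_fps(displayed_fps: float, generated_per_rendered: int) -> float:
    """Rendered (input-sampled) FPS, given the displayed FPS and how
    many AI-generated frames accompany each rendered frame."""
    return displayed_fps / (1 + generated_per_rendered)

# DLSS 3: one generated frame per rendered frame (2x total).
print(input_sample_fps(40, 1))   # 20.0 — feels sluggish
# DLSS 4 multi frame generation: up to three generated frames (4x total).
print(input_sample_fps(80, 3))   # 20.0 — same 20 FPS input sampling
# Hitting ~40-50 FPS input sampling at 4x requires 160-200 FPS displayed.
print(input_sample_fps(160, 3))  # 40.0
```

The same 20 FPS input sampling can hide behind a 40 FPS or an 80 FPS displayed framerate, which is why the displayed number alone doesn't tell you how a game will feel.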
(Image credit: Nvidia)

Our other question was in regards to the neural textures and rendering that was shown. Nvidia showed some examples of memory use of 48MB for standard materials, and slashed that to 16MB with "RTX Neural Materials." But what exactly does that mean, and how will this impact the gaming experience? We were particularly interested in whether or not there was potential to help GPUs that don't have as much VRAM, things like the RTX 4060 with 8GB of memory.
Unfortunately, Jensen says these neural materials will require specific implementation by content creators. He said that Blackwell has new features that allow developers and artists to put shader code intermixed with neural rendering instructions into materials. Material descriptions have become quite complex, and describing them mathematically can be difficult. But Jensen also said that "AI can learn how to do that for us."
These new features won't be available on previous generation GPUs, as they don't have the required hardware capabilities to mix shader code with neural code. So to get the benefit, neural materials will require content-side work, and developers will need to specifically adopt these new features.
That means, for one, that if we do end up with an RTX 5060 8GB card as an example, the 8GB of VRAM could still prove to be a limiting factor for a whole host of existing games. It also means the RTX 4060 and 4060 Ti with 8GB won't get a new lease on life thanks to neural rendering. Or at least, that's how we interpret things. But maybe an AI network can learn how to do some of these things for us in the future.