Cyberpunk 2077 2.0 is coming soon – the first showcase for DLSS 3.5 ray reconstruction, integrated into a full DLSS package including super resolution and frame generation, all combining to produce a state-of-the-art visual experience. In this roundtable, we discuss how ray reconstruction works, how it was developed and its applications beyond path-traced games. We also talk about the evolution of the original DLSS, the success of DLSS in the modding community and the future of machine learning in PC graphics.
Many thanks to all participants in this roundtable chat: Bryan Catanzaro, Vice President of Applied Deep Learning Research at Nvidia; Jakub Knapik, VP of Art and Global Art Director at CD Projekt RED; GeForce evangelist Jacob Freeman; and Pedro Valadas of the PCMR subreddit.
00:00:00 Introduction
00:01:10 When did the DLSS 3.5 Ray Reconstruction project start and why?
00:04:16 How did you get DLSS 3.5 Ray Reconstruction up and running?
00:06:17 What was it like to integrate DLSS 3.5 for Cyberpunk 2077?
00:10:21 What are the new game inputs for DLSS 3.5?
00:11:25 Can DLSS 3.5 be used for hybrid ray tracing titles and not just path traced ones?
00:12:41 What is the target performance budget for DLSS 3.5?
00:14:10 Is DLSS a crutch for bad performance optimisation in PC games?
00:20:19 What makes machine learning specifically useful for denoising?
00:24:00 Why is DLSS naming kind of confusing?
00:27:03 What did the new denoising enable for Cyberpunk 2077’s graphical vision?
00:32:10 Will Nvidia still focus on performance *without* DLSS at native resolutions?
00:38:26 What prompted the change internally at Nvidia to move away from DLSS 1.0 and pursue DLSS 2.0?
00:43:43 What do you think about DLSS mods for games that lack DLSS?
00:49:52 Where can machine learning go in the future for games beyond DLSS 3.5?
And still, ray tracing and DLSS usually only get around a minute or so of attention from GPU reviewers, with comments like "if you are interested in such things" – it's almost as if they were trying to say it doesn't matter that much.
Awesome discussion. Super insightful and thought-provoking! Looking forward to the new DLSS features in Cyberpunk (in addition to the expansion / 2.0 of course), and excited about the growth and implementation of DLSS (and AI tools in general) in other games going forward. Cheers Alex / DF 🍻
Well, obviously I understand why Bryan called all frames "fake frames", but that's a bit much. No matter how excited he is, and we all are, about path tracing, it would mean all games until now without path tracing are just fake, because all games are literally made of those said frames!!
Nevertheless, this one’s an amazing and informative video. Good work Alex & DF.
Looking forward to such informative and in-depth videos in the future – unless they end up being made by an AI, lol.
I think DLSS 3.5 has a lot of potential for cloud gaming, especially for helping with latency.
Catanzaro is a genius! Is he of Italian origin?
The one thing I always want with DLSS (or any reconstruction) is clarity in the options: Display/Output Resolution (4K/1440p, or better yet, the actual monitor resolution) and Render/Internal Resolution (whatever the internal res is). The details being obfuscated behind Quality/Balanced/Performance/Ultra Performance has always been annoying to me (a rough mapping is sketched below). I love PC gaming for the granularity it offers, but not having direct control over some of those settings is annoying.
Also, I would love to see developers on PC looking at implementing things like the 40fps triple-buffered 120Hz VRR output that Insomniac is doing on PS5 – a cool solution to squeeze out even more fidelity/performance options.
Would also love to see some sort of reverse/inverse settings option, where the user says "I would like locked 4K and locked 60fps" and the engine/AI benchmarks the system and finds the settings that make those targets achievable…
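For reference, a minimal sketch of the preset-to-resolution mapping mentioned above – the scale factors are the commonly cited DLSS 2 defaults, not values taken from this video, so treat them as assumptions:

```python
# Rough sketch of the render-resolution math hidden behind the
# Quality/Balanced/Performance labels. Scale factors are the commonly
# documented DLSS 2 defaults (assumed here, not confirmed by the video).

DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(output_w: int, output_h: int, preset: str) -> tuple[int, int]:
    """Return the approximate internal render resolution for a DLSS preset."""
    scale = DLSS_SCALE[preset]
    return round(output_w * scale), round(output_h * scale)

if __name__ == "__main__":
    for preset in DLSS_SCALE:
        w, h = internal_resolution(3840, 2160, preset)
        print(f"4K output, {preset}: renders at ~{w}x{h}")
```

Exposing the scale (or the internal resolution directly) in-game would make the trade-off much easier to reason about than the preset names do.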
I have an Nvidia 4070 Ti and get great frame rates with FG on, but I get terrible tearing. And V-sync is disabled when using it, so I cannot use it in games.
Native – forever👍 Any upscalers – must die👎
Any picture with upscalers will always remain fake; no one and nothing can ever change that. This is simply a fact that big greedy corporations like Nvidia cannot accept.
The game became an ad for Nvidia. Sad.
I wonder if they've changed the graphics for consoles at all
Really good selection of people with different perspectives in this roundtable. Really love these nitty-gritty, technical, unscripted conversations!
Bryan Catanzaro got that Nvidia HairWorks
Does anyone have a link to that demo where everything is rendered by neural net?
Generally I like the interview.
The issue I have with frame generation is that it is not calculating real frames. This has an impact on the control feel of the game it is running on, because control input is not sampled for these generated frames, especially when starting from a low native frame rate. So comparing it to e.g. LOD is not a good comparison, because LOD does not affect the control feel of the game and FG does, which makes FG very different from other technologies.
I like the tech and it is moving forward/getting better, but FG as an FPS booster is not great in my opinion as it is right now. It is okay in a single-player game like CP2077, but for multiplayer/competitive games it is not a good thing, as control input is very crucial, and e.g. LOD does not have the same negative impact on control feel.
But great video.
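To put rough numbers on the frame-generation latency point above – a back-of-the-envelope sketch, assuming interpolation-based generation and ignoring Reflex and render-queue effects, so the figures are illustrative rather than measured:

```python
# Back-of-the-envelope sketch of why frame generation raises the displayed
# frame rate without raising the input sampling rate. Numbers are purely
# illustrative; real pipelines (Reflex, render queue, pacing) shift them.

def frame_gen_summary(base_fps: float) -> None:
    base_frame_ms = 1000.0 / base_fps      # interval between *rendered* frames
    displayed_fps = base_fps * 2           # one generated frame per rendered frame
    # Interpolation needs the next rendered frame before a generated frame can
    # be shown, so the newest real frame is held back on the order of one
    # rendered-frame interval.
    holdback_ms = base_frame_ms
    print(f"base {base_fps:>5.0f} fps -> shows {displayed_fps:.0f} fps, "
          f"input still sampled every {base_frame_ms:.1f} ms, "
          f"~{holdback_ms:.1f} ms extra hold-back")

for fps in (30, 60, 120):
    frame_gen_summary(fps)
```

Which is why it feels fine from a high base frame rate and much worse from a low one: the input interval and the hold-back both scale with the base frame time.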
I think that if they're used effectively to make good games, upscaling tools shouldn't be considered a 'crutch' that diminishes the work of the developers. Musicians aren't expected to make their own instruments, but video game studios today are expected to write and optimize their own code. Eventually, tools like game engine software and AI upscaling will become advanced enough that bespoke engines may be quickly and efficiently designed for studios so they will be able to put more of their energy into designing assets and building their worlds.
Tetris CD-i Level 0. Let's go!
All this just to look bad next to Fortnite is hilarious.
I hope they aren't fully abandoning the Cyberpunk engine for Unreal; it is such a good engine, and I'm not looking forward to the performance issues and shader stutters in future games.
Fantastic video and conversation.
Thank you Alex, the DF team, and all guests – that was an awesome and very informative video.
*Bryan Talks*
I understood some of those words….
Even DLSS1 looks good to me. Zooming in I see the issues of course, but at my normal viewing distance it’s not an issue.
This feels extremely scripted …
Surprised to see Theo Von had such a geeky side hustle at Nvidia
It's not a great time to be a PC gamer because I need a million-dollar nuclear PC to run games 😂
I could see a future where AI deepfake-style post-processing takes an input frame, say CS:GO quality, and turns it photorealistic.
"Native Resolution is out"
Or whatever other statement comes next from Nvidia marketing.
Such a great interview! Quality content, loved it.
With regards to DLSS as a crutch (and, in general, image reconstruction for upsampling), I think a better way of putting it would be: when it was first deployed it was a user-choice feature, an optional way to gain more performance at (perhaps only sometimes) a small quality drop-off. But the fear is that it will no longer be optional and the choice will be taken away from the user, because it will be necessary for the game to achieve minimum acceptable performance. I.e. before it was a BONUS up to the user, and soon it may be REQUIRED.
I beg of you, please do a FULL recommended settings guide for 3080-series cards. I've been doing benchmarks since the release of the update… Pain
3:45 It would be awesome to see Minecraft RTX get an update alongside the upcoming 1.20 for DLSS 3.5 and fix all these issues.
I hope it is just YT compression, but it would impact both sides of the comparison. In the direct comparisons of PT and rasterization, PT is blurry as heck. "No tradeoffs" my butt. Even considering YT compression and this sadly only being a 1080p video, the rasterization looks waaaay more crisp. Yes, the lighting is better with PT, but I'd take a crisp image over a blurry one. That is something I can't really compromise on, and it makes this whole discussion really disappointing and not exciting for me if this is our future of games.
Fantastic, informative and full discussion about the absolute bleeding edge of graphics right now. I felt like the balance of participants (engineers, users, etc) definitely helped make it so as well. Just a great piece of content, thanks!
If they claim characters are important, how come V has no reflection in the game world? LMAO
Having a PCMR moderator on already feels like it carries a bias towards PC.
I believe strongly that most of the innovation in ray reconstruction and resolution reconstruction that Nvidia's engineers have done falls outside the scope of the neural network tech itself. Sure, they can use that tech as a crutch for the final image, but I have a sense that you could have worked around the neural net process and got a similar enough output that it would not justify the dedicated compute hardware.
Ref AMD's FSR – no neural networks, competitive result.
The neural net stuff I still consider marketing fluff.
It's funny how they laugh about Starfield optimization and complain to developers, when they are using DLSS 3.5 to optimize a three-year-old game that is only now finally finished.
"It's a bethesda game" RIP
Regarding the question of why neural networks are good for denoising: he makes it sound a bit magical, but he actually makes a good point. NNs are what we call function approximators – basically, what you learned in calculus at school an NN can learn too, and with massively parallel, specialised hardware it can do this efficiently.
What NNs do is learn the best settings of the knobs he is talking about, and they learn them so well because they can try millions of configurations a second thanks to advanced silicon.
The way we go beyond that is by taking everything we hope has an impact, putting it into the model, and exploiting the fact that it can tune the knobs so fast that it performs all those tedious tasks much faster than we could.
Also, there are ways to determine which inputs (normal maps and so on) are necessary or improve the model's performance (though in NNs this is itself a topic of ever-evolving research).
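For anyone curious what "tuning the knobs" looks like in practice, here is a minimal toy sketch – it assumes nothing about Nvidia's actual denoiser; it is just a tiny numpy MLP whose weights (the knobs) are nudged by gradient descent until it approximates sin(x):

```python
import numpy as np

# Toy "knob tuning": a one-hidden-layer MLP learning to approximate sin(x)
# from samples by gradient descent. Purely illustrative; unrelated to
# Nvidia's real denoiser, just the function-approximation idea in miniature.

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)                                   # the function we want to learn

hidden = 32
W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)   # the "knobs"
W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5001):
    # forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)
    # backward pass (manual gradients for this tiny network)
    d_pred = 2 * err / len(x)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0)
    # nudge every knob a little in the direction that reduces the error
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    if step % 1000 == 0:
        print(f"step {step:4d}  mse {loss:.5f}")
```

A denoiser does the same thing at a vastly larger scale: the "function" maps noisy frames plus auxiliary inputs (normals, motion vectors, etc.) to clean frames, and training searches the knob settings that best reproduce reference images.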
Imagine your renderer being so incredibly bad that you need to train an AI denoiser to Vaseline-smear it all together. Can we go back to cubemaps and baked lighting, please?