Nvidia has been training its DLSS model non-stop for six years straight
Nvidia's dedication to DLSS goes far beyond what many of us think. The company has revealed a surprising secret weapon in its quest for visual fidelity: a massive supercomputer dedicated to training and refining DLSS models.
This revelation came during RTX Blackwell Editor's Day at CES 2025 (via PCGamer), where Brian Catanzaro, VP of applied deep learning research, explained the development process behind DLSS 4. Catanzaro said that Nvidia has been using a dedicated supercomputer packed with thousands of Nvidia GPUs to train and improve DLSS non-stop for the past six years, running 24/7, 365 days a year.
While Nvidia's use of supercomputers for AI research is no surprise, the scale of the infrastructure devoted specifically to DLSS is remarkable. This sustained investment has clearly contributed to DLSS's steady improvement across its iterations, from the initial release to the latest DLSS 4, which introduces a new transformer model that promises even better image quality.
Catanzaro also shed light on the training process, emphasising the importance of analysing failures. When the DLSS model fails, he explained, the result appears as ghosting, flickering, or blurriness. These failures are identified across numerous games, and the team works out why the model makes incorrect decisions about how to construct the image. Nvidia then expands its training dataset with examples of ideal visuals alongside the cases where DLSS struggled, retrains the model, and tests it across hundreds of games. It is a meticulous and resource-intensive process, but the results speak for themselves.
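The workflow Catanzaro describes, finding failure cases, folding them back into the training set, retraining, and re-testing, is a form of what machine learning practitioners often call hard-example mining. The toy sketch below illustrates that loop in miniature; every name and number in it is a made-up stand-in, not anything from Nvidia's actual pipeline:

```python
def render_quality(model_weight, frame_difficulty):
    # Toy stand-in for perceptual quality: a stronger model (higher weight)
    # handles more difficult frames without visible artifacts.
    return model_weight - frame_difficulty

def find_failures(model_weight, frames, threshold=0.0):
    # "Failures" are frames whose quality falls below an acceptance threshold,
    # standing in for artifacts such as ghosting, flickering, or blur.
    return [f for f in frames if render_quality(model_weight, f) < threshold]

def retrain(model_weight, hard_examples, lr=0.5):
    # Expanding the dataset with hard examples and retraining is modelled here
    # as nudging the model toward handling the hardest observed case.
    if not hard_examples:
        return model_weight
    return model_weight + lr * (max(hard_examples) - model_weight + 1.0)

def training_loop(frames, iterations=10):
    # Repeat: test across content, collect failures, retrain, re-test.
    model_weight = 0.0
    for _ in range(iterations):
        failures = find_failures(model_weight, frames)
        if not failures:
            break
        model_weight = retrain(model_weight, failures)
    return model_weight, find_failures(model_weight, frames)

weight, remaining = training_loop([0.2, 1.5, 3.0])
print(weight, remaining)  # after a few rounds, no failing frames remain
```

In the real system the "frames" are captures from hundreds of games, the "quality" check involves human and automated review, and retraining runs on the supercomputer described above, but the shape of the loop is the same.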
KitGuru says: As AI further develops, do you think the evolution of DLSS will also accelerate?