NVIDIA announced the DLSS Transformer Model at CES 2025. The frame-upscaling model improves Ray Reconstruction in DLSS 4, as well as DLAA and ultra-high-resolution quality modes.
Previous versions of DLSS relied on convolutional neural networks (CNNs) to generate pixels from information about local regions of the current and previous frames. However, NVIDIA had reached the limits of that architecture, and releasing new scaling profiles was no longer enough.
The new DLSS model uses twice as many parameters as previous upscaling models and is not restricted to the local content of individual parts of the frame. Instead, it processes the entire frame and evaluates the importance of every pixel, even across multiple adjacent frames. The company claims the model has a deeper understanding of the scene and delivers a more stable image by minimizing flicker, artifacts, and ghosting.
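The difference between the two approaches can be sketched in a toy example. This is purely illustrative and not NVIDIA's actual model: it contrasts a CNN-style output pixel, which only sees a small k×k neighborhood, with a transformer-style output pixel, which attends to every pixel in the frame via softmax-weighted similarity scores.

```python
import numpy as np

def local_conv_mask(h, w, y, x, k=3):
    # CNN-style receptive field: an output pixel at (y, x) only
    # depends on a k x k neighborhood of the input.
    mask = np.zeros((h, w))
    mask[max(0, y - k // 2): y + k // 2 + 1,
         max(0, x - k // 2): x + k // 2 + 1] = 1.0
    return mask

def attention_weights(pixels, query_idx):
    # Transformer-style self-attention: one pixel's output is a
    # softmax-weighted combination over ALL pixels in the frame,
    # so every pixel can contribute to every output.
    q = pixels[query_idx]
    scores = pixels @ q / np.sqrt(pixels.shape[1])
    e = np.exp(scores - scores.max())
    return e / e.sum()

h, w, d = 4, 4, 8
rng = np.random.default_rng(0)
frame = rng.normal(size=(h * w, d))   # flattened 4x4 frame, 8-dim features

conv_mask = local_conv_mask(h, w, 1, 1)
attn = attention_weights(frame, query_idx=5)

print(int(conv_mask.sum()))          # 9  -> only 9 of 16 pixels are visible
print(np.count_nonzero(attn > 0))    # 16 -> every pixel contributes
```

In a real transformer the attention weights are learned and computed with separate query, key, and value projections; the sketch above keeps only the structural point that the receptive field is global rather than local.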
The Transformer Model demonstrates better detail in motion and smoother edges. It also improves image quality with Ray Reconstruction, especially in scenes with complex lighting. Beyond image quality, the model improves efficiency: NVIDIA says it can increase ray tracing performance by 30-50% while reducing latency from 3.25 ms to 1 ms (on the RTX 5090).
The final version of the DLSS Transformer Model was released after six months of work focused on finding bugs and improving stability. The DLSS Super Resolution and Ray Reconstruction SDK 310.3.0 is available to developers on GitHub, and NVIDIA will officially switch to the new model and phase out the CNN version in the coming months.
Sources: VideoCardz, Tom’s Hardware