News Software 07-17-2025

First NVIDIA Neural Texture Compression tests — up to 90% less video memory consumption and twice the rendering speed


Andrii Rusanov

News writer


An enthusiast tested NVIDIA’s Neural Texture Compression in combination with Microsoft DirectX Raytracing 1.2 — the result is fantastic.

X user @opinali managed to test neural texture compression, with impressive results in terms of performance and VRAM savings (Osvaldo’s original thread has a lot of technical details). In rendering scenarios, he noted a significant drop in video memory usage with NTC in combination with DirectX Raytracing (DXR) 1.2. When Cooperative Vectors and NTC were enabled simultaneously in "Default" mode, the texture rendered at 2350 fps, compared to 1030 fps when they were completely disabled, more than twice the frame rate. Video memory usage also differed significantly.
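For reference, here is a quick back-of-the-envelope check of the reported numbers, a minimal Python sketch that uses only the two frame rates quoted above (nothing else from the original test setup), showing why this amounts to more than a 2x speedup:

    # Frame rates reported in the test: NTC + Cooperative Vectors on vs. both off
    fps_enabled = 2350
    fps_disabled = 1030

    speedup = fps_enabled / fps_disabled              # ~2.28x higher frame rate
    frame_time_cut = 1 - fps_disabled / fps_enabled   # ~56% shorter frame time

    print(f"Speedup: {speedup:.2f}x, frame time reduced by {frame_time_cut:.0%}")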

Neural Texture Compression uses neural networks to compress and decompress game textures, reducing their size with minimal impact on quality. In addition, one of the key features of the Microsoft DXR 1.2 update is the introduction of cooperative vectors, which allow GPU shaders to work together on operations with small matrices or vectors. The combination of NTC and cooperative vectors forms a compression/decompression mechanism that works efficiently through DX12 and reduces video memory usage.
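To illustrate the general idea, here is a minimal conceptual sketch in Python/NumPy (not NVIDIA's actual NTC SDK or shader code; the network sizes, weights, and names are made up for illustration). Neural texture compression essentially replaces a plain texel lookup with the evaluation of a tiny neural network, a few small matrix-vector multiplications of exactly the kind cooperative vectors are meant to accelerate inside shaders:

    import numpy as np

    # Hypothetical sizes: a small latent vector stored per texel region
    # and a tiny two-layer decoder network with shared weights.
    LATENT_DIM = 8    # compressed features stored instead of raw RGBA data
    HIDDEN_DIM = 16   # hidden layer width of the decoder
    OUT_DIM = 4       # decoded RGBA value

    rng = np.random.default_rng(0)

    # Stand-ins for trained decoder weights (in a real pipeline these come from training).
    W1 = rng.standard_normal((HIDDEN_DIM, LATENT_DIM)).astype(np.float32)
    b1 = np.zeros(HIDDEN_DIM, dtype=np.float32)
    W2 = rng.standard_normal((OUT_DIM, HIDDEN_DIM)).astype(np.float32)
    b2 = np.zeros(OUT_DIM, dtype=np.float32)

    def decode_texel(latent):
        # Decompressing one texel boils down to two small matrix-vector products
        # plus nonlinearities; on the GPU, cooperative vectors let shader threads
        # run these small matrix operations cooperatively and efficiently.
        hidden = np.maximum(W1 @ latent + b1, 0.0)         # ReLU
        return 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))   # RGBA in [0, 1]

    # The compressed texture stores only the small latent per texel (plus the shared
    # decoder weights), which is where the large VRAM savings come from.
    latent = rng.standard_normal(LATENT_DIM).astype(np.float32)
    print(decode_texel(latent))

The point of the sketch is only the shape of the work: per-texel decoding reduces to a handful of small matrix operations, which is why exposing those operations to shaders through DXR 1.2's cooperative vectors matters for performance.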

NTC currently works only on NVIDIA graphics cards, as alternatives from Intel and AMD do not yet offer neural rendering tools. NTC mode is available with the latest 590.26 Preview drivers, which also enable NVIDIA Smooth Motion technology on RTX 40 series cards. If these technologies are combined with each other and integrated into games, they could deliver a real performance boost. As a reminder, AMD has introduced new procedural tree generation, which also minimizes memory usage.

Source: Wccftech

