NVIDIA has long been at the forefront of AI research, as shown by its creation of Deep Learning Super Sampling (DLSS). However, image reconstruction and upscaling make up only one of the many research fields where neural graphics techniques are applicable.
At the upcoming SIGGRAPH 2023, which will take place between August 6th and 10th in Los Angeles, NVIDIA will present 20 papers on generative AI and neural graphics. For those undaunted by intriguing yet very technical reads, all of the publications are listed on this page.
In this article, I'll go through some of the most interesting techniques outlined in the new NVIDIA papers for game developers. By far the most readily applicable is the neural compression technique for material textures described in Random-Access Neural Compression of Material Textures (Karthik Vaidyanathan, Marco Salvi, Bartlomiej Wronski, Tomas Akenine‑Möller, Pontus Ebelin, Aaron Lefohn).
The team of NVIDIA engineers posited the need to reduce texture storage requirements at a time when assets are of extremely high quality but also demand increasingly large amounts of disk space. To achieve this goal, they've combined GPU texture compression with neural compression techniques.
Using this approach we enable low-bitrate compression, unlocking two additional levels of detail (or 16× more texels) with similar storage requirements as commonly used texture compression techniques. In practical terms, this allows a viewer to get very close to an object before losing significant texture detail. Our main contributions are:
• A novel approach to texture compression that exploits redundancies spatially, across mipmap levels, and across different material channels. By optimizing for reduced distortion at a low bitrate, we can compress two more levels of detail in the same storage as block-compressed textures. The resulting texture quality at such aggressively low bitrates is better than or comparable to recent image compression standards like AVIF and JPEG XL, which are not designed for real-time decompression with random access.
• A novel low-cost decoder architecture that is optimized specifically for each material. This architecture enables real-time performance for random access and can be integrated into material shader functions, such as filtering, to facilitate on-demand decompression.
• A highly optimized implementation of our compressor, with fused backpropagation, enabling practical per-material optimization with resolutions up to 8192 × 8192 (8k). Our compressor can process a 9-channel, 4k material texture set in 1-15 minutes on an NVIDIA RTX 4090 GPU, depending on the desired quality level.
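The key idea in the contributions above is that the decoder is a tiny neural network, specialized per material, that can be evaluated for a single texel on demand rather than decompressing a whole image. Here is a minimal sketch of that random-access pattern: a latent feature grid plus a small MLP decoder. The grid size, layer widths, and random weights are invented for illustration and are not the paper's actual architecture or feature layout.

```python
import random

random.seed(0)

# Toy latent grid: G x G cells, each storing F learned features
# (stands in for the compressed texture representation).
G, F = 4, 4
grid = [[[random.uniform(-1, 1) for _ in range(F)] for _ in range(G)] for _ in range(G)]

# Toy per-material decoder: F features -> H hidden units -> 3 RGB channels.
H = 8
W1 = [[random.uniform(-1, 1) for _ in range(F)] for _ in range(H)]
W2 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(3)]

def decode_texel(u, v):
    """Randomly access one texel: fetch the latent cell at (u, v),
    then run the tiny decoder on just that feature vector."""
    fx, fy = int(u * G) % G, int(v * G) % G    # nearest-cell fetch, no full-image decode
    feat = grid[fy][fx]
    hidden = [max(0.0, sum(w * f for w, f in zip(row, feat))) for row in W1]  # ReLU layer
    rgb = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    return rgb

color = decode_texel(0.3, 0.7)
print(color)  # one texel decoded without touching the rest of the texture
```

Because each texel decode depends only on its local latent fetch, the evaluation can sit inside a material shader and run per pixel, which is what makes the approach compatible with on-demand filtering.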
As shown in the video demonstration above, the improvement in material texture detail is significant. The memory budget is nearly the same as regular BCx textures (3.6MB vs. 3.3MB), while the rendering cost of the GPU-based decompression at native 4K resolution is more than twice as high (1.15ms vs. 0.49ms). However, NVIDIA engineers believe this overhead would be smaller in realistic cases thanks to the GPU's latency-hiding capabilities.
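The quoted figures are worth unpacking. Each extra mip level doubles resolution in both dimensions, so two additional levels of detail mean 4² = 16× more texels, yet storage grows only about 9% while decode cost roughly doubles. A quick arithmetic check on the numbers from the demo:

```python
# Figures quoted above from the NVIDIA demo
neural_mb, bcx_mb = 3.6, 3.3      # storage: neural compression vs. BCx textures
neural_ms, bcx_ms = 1.15, 0.49    # GPU decode cost at native 4K

# One extra mip level = 2x resolution per axis = 4x texels;
# two extra levels of detail therefore mean 4 ** 2 = 16x more texels.
texel_gain = 4 ** 2

storage_overhead = neural_mb / bcx_mb   # ~1.09x the storage
decode_overhead = neural_ms / bcx_ms    # ~2.35x the decode cost
print(texel_gain, round(storage_overhead, 2), round(decode_overhead, 2))
# prints: 16 1.09 2.35
```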
Another impressive SIGGRAPH 2023 paper, titled Interactive Hair Simulation on the GPU Using ADMM, shows neural physics enabling an incredibly realistic simulation of tens of thousands of hair strands. Essentially, a neural network is trained to predict how the hair is supposed to move.
That said, the paper's author, NVIDIA researcher Gilles Daviet, pointed out that it is unclear how this method would fare in large-scale scenes, so it may not be very applicable to games yet.
As a demonstration that these papers often develop into usable solutions, NVIDIA confirmed that its NeuralVDB technique is now available in early access, delivering AI and GPU optimizations for up to a 100× lower memory footprint compared to OpenVDB when rendering volumetric clouds, fire, water, or smoke.