Gao Duan, Mu Haoyuan, Xu Kun
IEEE Trans Vis Comput Graph. 2023 Dec;29(12):5325-5341. doi: 10.1109/TVCG.2022.3209963. Epub 2023 Nov 10.
We propose neural global illumination, a novel method for fast rendering of full global illumination in static scenes with dynamic viewpoint and area lighting. The key idea of our method is to utilize a deep rendering network to model the complex mapping from each shading point to its global illumination. To learn this mapping efficiently, we propose a neural-network-friendly input representation comprising the attributes of each shading point, viewpoint information, and a combinational lighting representation, which together enable high-quality fitting with a compact neural network. To synthesize high-frequency global illumination effects, we transform the low-dimensional input into a higher-dimensional space via positional encoding and model the rendering network as a deep fully connected network. In addition, we feed a screen-space neural buffer to the rendering network to share global information across on-screen objects with each shading point. We have demonstrated our neural global illumination method on a wide variety of scenes exhibiting complex, all-frequency global illumination effects such as multi-bounce glossy interreflection, color bleeding, and caustics.
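The abstract mentions lifting low-dimensional inputs (e.g., a shading point's position) into a higher-dimensional space via positional encoding before feeding them to the fully connected network. The paper's exact formulation is not given here, so the following is a minimal sketch of the standard sinusoidal positional encoding commonly used for such networks; the function name and the `num_freqs` parameter are illustrative assumptions, not the authors' API.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Sketch of sinusoidal positional encoding (an assumption, not the
    paper's exact scheme): each scalar coordinate v is mapped to
    [sin(2^k * pi * v), cos(2^k * pi * v)] for k = 0..num_freqs-1,
    turning a D-dim input into a 2 * D * num_freqs-dim feature."""
    x = np.asarray(x, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi     # (num_freqs,)
    angles = x[..., None] * freqs                     # (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)             # flatten per point

# Illustrative use: encode one 3-D shading-point position.
point = np.array([[0.25, -0.5, 0.75]])               # shape (1, 3)
features = positional_encoding(point, num_freqs=4)
print(features.shape)                                 # (1, 24)
```

The higher-frequency sinusoids make it easier for a compact fully connected network to fit sharp, high-frequency effects (e.g., caustics) that a raw low-dimensional input would smooth out.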