Real-Time Lighting Estimation for Augmented Reality via Differentiable Screen-Space Rendering

Authors

Liu Celong, Wang Lingyu, Li Zhong, Quan Shuxue, Xu Yi

Publication

IEEE Trans Vis Comput Graph. 2023 Apr;29(4):2132-2145. doi: 10.1109/TVCG.2022.3141943. Epub 2023 Feb 28.

Abstract

Augmented Reality (AR) applications aim to provide realistic blending between the real world and virtual objects. One of the key factors for realistic AR is correct lighting estimation. In this article, we present a method that estimates the real-world lighting condition from a single image in real time, using information from an optional support plane provided by advanced AR frameworks (e.g., ARCore, ARKit). By analyzing the visual appearance of the real scene, our algorithm predicts the lighting condition from the input RGB photo. In the first stage, we use a deep neural network to decompose the scene into several components: lighting, normal, and Bidirectional Reflectance Distribution Function (BRDF). Then we introduce differentiable screen-space rendering, a novel approach that provides the supervisory signal for regressing lighting, normal, and BRDF jointly. We recover the most plausible real-world lighting condition using Spherical Harmonics and a main directional light. Through a variety of experiments, we demonstrate that our method provides improved results over prior works, both quantitatively and qualitatively, and that it can enhance real-time AR experiences.
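
The pipeline described above can be illustrated with a short sketch: per-pixel predictions from the network (normals and diffuse albedo) plus a second-order Spherical Harmonics (SH) lighting vector are pushed through a differentiable screen-space shading step, and the re-rendered image is compared with the captured frame to supervise lighting, normal, and reflectance jointly. The code below is a minimal illustration using the standard Ramamoorthi-Hanrahan SH irradiance formulation, not the authors' implementation; the tensor names, shapes, Lambertian-only shading, and the L1 photometric loss are all assumptions made for the example.

```python
# Minimal sketch of a differentiable screen-space shading loss (PyTorch).
# Assumed inputs: per-pixel normals and albedo predicted by a network, plus
# 2nd-order SH lighting coefficients; all shapes are illustrative.
import torch
import torch.nn.functional as F


def sh_irradiance(normals: torch.Tensor, sh_coeffs: torch.Tensor) -> torch.Tensor:
    """Evaluate 2nd-order SH irradiance per pixel.

    normals:   (B, 3, H, W) unit surface normals in camera space.
    sh_coeffs: (B, 9, 3) SH lighting coefficients, one set per RGB channel.
    returns:   (B, 3, H, W) diffuse irradiance.
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]  # each (B, H, W)
    ones = torch.ones_like(nx)
    # Ramamoorthi-Hanrahan irradiance constants.
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    basis = torch.stack([
        c4 * ones,                 # L00
        2.0 * c2 * ny,             # L1-1
        2.0 * c2 * nz,             # L10
        2.0 * c2 * nx,             # L11
        2.0 * c1 * nx * ny,        # L2-2
        2.0 * c1 * ny * nz,        # L2-1
        c3 * nz * nz - c5,         # L20
        2.0 * c1 * nx * nz,        # L21
        c1 * (nx * nx - ny * ny),  # L22
    ], dim=1)                      # (B, 9, H, W)
    # Contract the SH basis with per-channel coefficients -> (B, 3, H, W).
    return torch.einsum('bkhw,bkc->bchw', basis, sh_coeffs)


def screen_space_render_loss(image, normals, albedo, sh_coeffs):
    """Photometric supervision: re-shade the scene in screen space and compare
    with the captured RGB frame. Differentiable w.r.t. normals, albedo, and
    the SH lighting, so gradients flow back to all three predictions."""
    rendered = albedo * sh_irradiance(normals, sh_coeffs)
    return F.l1_loss(rendered, image)
```

In this sketch the "rendering" is a purely per-pixel shading pass over the network's screen-space buffers, which is what makes it cheap enough for real-time use; a separate directional-light term, as mentioned in the abstract, could be added on top of the SH irradiance in the same way.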

