

Baking Neural Radiance Fields for Real-Time View Synthesis

Authors

Hedman Peter, Srinivasan Pratul P, Mildenhall Ben, Reiser Christian, Barron Jonathan T, Debevec Paul

Publication

IEEE Trans Pattern Anal Mach Intell. 2025 May;47(5):3310-3321. doi: 10.1109/TPAMI.2024.3381001. Epub 2025 Apr 8.

Abstract

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e., "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.
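To make the baked representation concrete, below is a minimal sketch of how a Sparse Neural Radiance Grid could be ray-marched at render time. It assumes the deferred-shading formulation described in the full paper, in which each occupied voxel of the baked grid stores a density, a diffuse RGB color, and a small learned feature vector, and a single tiny MLP evaluation per ray (rather than hundreds of large MLP queries) adds view-dependent appearance. All names here (grid.lookup, tiny_view_mlp, etc.) are hypothetical placeholders, not the authors' API.

```python
# Illustrative sketch of SNeRG-style deferred rendering (not the authors' code).
import numpy as np

def render_ray(origin, direction, grid, tiny_view_mlp,
               num_samples=128, step=0.01):
    """Alpha-composite density/color/features from a baked sparse voxel grid,
    then run one small MLP per ray to add a view-dependent residual."""
    transmittance = 1.0
    rgb_acc = np.zeros(3)                   # accumulated diffuse color
    feat_acc = np.zeros(grid.feature_dim)   # accumulated learned feature vector

    for i in range(num_samples):
        x = origin + (i + 0.5) * step * direction
        voxel = grid.lookup(x)              # None for empty space (sparse grid)
        if voxel is None:
            continue                        # empty voxels cost no network query
        sigma, diffuse_rgb, feature = voxel
        alpha = 1.0 - np.exp(-sigma * step) # opacity of this ray segment
        weight = transmittance * alpha      # volume-rendering compositing weight
        rgb_acc += weight * diffuse_rgb
        feat_acc += weight * feature
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:            # early ray termination
            break

    # Deferred view-dependence: a single tiny MLP call per ray, instead of
    # querying a large MLP at every sample as in the original NeRF.
    specular_rgb = tiny_view_mlp(feat_acc, direction)
    return np.clip(rgb_acc + specular_rgb, 0.0, 1.0)
```

Because the expensive per-sample work reduces to lookups into a precomputed sparse grid, and the only network evaluation is the tiny per-ray MLP, this style of renderer can reach the real-time frame rates reported in the abstract on commodity GPUs.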

