


Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis.

Authors

Nguyen-Ha Phong, Huynh Lam, Rahtu Esa, Matas Jiri, Heikkila Janne

Publication

IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2758-2769. doi: 10.1109/TPAMI.2023.3335311. Epub 2024 Apr 3.

DOI: 10.1109/TPAMI.2023.3335311
PMID: 37999969
Abstract

We present CG-NeRF, a cascaded and generalizable neural radiance fields method for view synthesis. Recent generalizable view synthesis methods can render high-quality novel views using a set of nearby input views. However, rendering speed remains slow due to the uniform point sampling inherent to neural radiance fields. Existing scene-specific methods can train and render novel views efficiently but cannot generalize to unseen data. Our approach addresses the problem of fast, generalizable view synthesis by proposing two novel modules: a coarse radiance fields predictor and a convolutional neural renderer. This architecture infers consistent scene geometry from the implicit neural fields and renders new views efficiently using a single GPU. We first train CG-NeRF on multiple 3D scenes of the DTU dataset, and the network can produce high-quality and accurate novel views on unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model. Experimental results show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
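The rendering bottleneck the abstract attributes to uniform point sampling comes from NeRF-style volume rendering: every ray is integrated over many evenly spaced samples, each requiring a network query. A minimal NumPy sketch of that compositing quadrature (not the authors' code; function and variable names are illustrative):

```python
import numpy as np

def render_ray(sigmas, colors, ts):
    """Volume-render one ray by alpha compositing.

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB predicted at those samples
    ts:     (N,) sample depths; uniform spacing means N network
            queries per ray, the cost CG-NeRF's coarse predictor
            and convolutional renderer aim to reduce
    """
    # Distance between consecutive samples (last interval repeated).
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    # Opacity contributed by each sample: 1 - exp(-sigma * delta).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas
    # Final pixel color is the weighted sum of sample colors.
    return (weights[:, None] * colors).sum(axis=0)
```

An effectively opaque first sample (very large density) returns that sample's color, while zero density everywhere returns black, matching the expected compositing behavior.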


Similar Articles

1. Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2758-2769. doi: 10.1109/TPAMI.2023.3335311. Epub 2024 Apr 3.

2. Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion.
Sensors (Basel). 2024 Mar 16;24(6):1919. doi: 10.3390/s24061919.

3. MPS-NeRF: Generalizable 3D Human Rendering From Multiview Images.
IEEE Trans Pattern Anal Mach Intell. 2025 Aug;47(8):6110-6121. doi: 10.1109/TPAMI.2022.3205910.

4. Baking Neural Radiance Fields for Real-Time View Synthesis.
IEEE Trans Pattern Anal Mach Intell. 2025 May;47(5):3310-3321. doi: 10.1109/TPAMI.2024.3381001. Epub 2025 Apr 8.

5. NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos.
Int J Comput Assist Radiol Surg. 2025 Jan;20(1):147-156. doi: 10.1007/s11548-024-03261-5. Epub 2024 Sep 13.

6. Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation.
Int J Comput Assist Radiol Surg. 2024 May;19(5):951-960. doi: 10.1007/s11548-024-03080-8. Epub 2024 Feb 27.

7. Learning Spherical Radiance Field for Efficient 360° Unbounded Novel View Synthesis.
IEEE Trans Image Process. 2024;33:3722-3734. doi: 10.1109/TIP.2024.3409052. Epub 2024 Jun 13.

8. UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene.
IEEE Trans Vis Comput Graph. 2025 Apr;31(4):2045-2057. doi: 10.1109/TVCG.2024.3378692. Epub 2025 Feb 27.

9. Neural Radiance Fields From Sparse RGB-D Images for High-Quality View Synthesis.
IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8713-8728. doi: 10.1109/TPAMI.2022.3232502. Epub 2023 Jun 5.

10. StructNeRF: Neural Radiance Fields for Indoor Scenes With Structural Hints.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15694-15705. doi: 10.1109/TPAMI.2023.3305295. Epub 2023 Nov 3.