

NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields.

Authors

Song Liangchen, Chen Anpei, Li Zhong, Chen Zhang, Chen Lele, Yuan Junsong, Xu Yi, Geiger Andreas

Publication

IEEE Trans Vis Comput Graph. 2023 May;29(5):2732-2742. doi: 10.1109/TVCG.2023.3247082. Epub 2023 Mar 29.

DOI: 10.1109/TVCG.2023.3247082
PMID: 37027699
Abstract

Freely exploring a real-world 4D spatiotemporal space in VR has been a long-term quest. The task is especially appealing when only a few or even a single RGB camera is used to capture the dynamic scene. To this end, we present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering. First, we propose to decompose the 4D spatiotemporal space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas. Each area is represented and regularized by a separate neural field. Second, we propose a feature-streaming scheme based on hybrid representations for efficiently modeling the neural fields. Our approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single hand-held cameras and multi-camera arrays, achieving rendering quality and speed comparable or superior to recent state-of-the-art methods, with reconstruction in 10 seconds per frame and interactive rendering. Project website: https://bit.ly/nerfplayer.
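The decomposition described above can be sketched as a probability-weighted blend of three per-category fields. The sketch below is a minimal illustration, not the paper's implementation: `logits_fn` and the three field functions are hypothetical stand-ins for learned networks, and the blending is a plain softmax-weighted sum over the static, deforming, and new fields.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the category axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def blended_radiance(points_4d, logits_fn, static_fn, deform_fn, new_fn):
    """Blend three per-category fields by predicted probabilities.

    points_4d : (N, 4) array of (x, y, z, t) samples.
    logits_fn : maps points to (N, 3) category logits
                (static, deforming, new) -- hypothetical learned head.
    *_fn      : each maps points to (N, C) radiance features.
    """
    p = softmax(logits_fn(points_4d))                # (N, 3) probabilities
    fields = np.stack([static_fn(points_4d),
                       deform_fn(points_4d),
                       new_fn(points_4d)], axis=1)   # (N, 3, C)
    # Probability-weighted sum over the three decomposed fields.
    return (p[:, :, None] * fields).sum(axis=1)      # (N, C)
```

In the actual method each field would be a separate neural network and the category probabilities would be regularized during training; here constant stub functions suffice to show the blending arithmetic.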


Similar Articles

1. NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields. IEEE Trans Vis Comput Graph. 2023 May;29(5):2732-2742. doi: 10.1109/TVCG.2023.3247082. Epub 2023 Mar 29.
2. NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos. Int J Comput Assist Radiol Surg. 2025 Jan;20(1):147-156. doi: 10.1007/s11548-024-03261-5. Epub 2024 Sep 13.
3. Scene-Aware Foveated Neural Radiance Fields. IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5039-5054. doi: 10.1109/TVCG.2024.3429416.
4. RISE-Editing: Rotation-invariant neural point fields with interactive segmentation for fine-grained and efficient editing. Neural Netw. 2025 Jul;187:107304. doi: 10.1016/j.neunet.2025.107304. Epub 2025 Feb 28.
5. Foundation Model-Guided Gaussian Splatting for 4D Reconstruction of Deformable Tissues. IEEE Trans Med Imaging. 2025 Jun;44(6):2672-2682. doi: 10.1109/TMI.2025.3545183.
6. VPRF: Visual Perceptual Radiance Fields for Foveated Image Synthesis. IEEE Trans Vis Comput Graph. 2024 Nov;30(11):7183-7192. doi: 10.1109/TVCG.2024.3456184. Epub 2024 Oct 10.
7. Fast Non-Rigid Radiance Fields from Monocularized Data. IEEE Trans Vis Comput Graph. 2024 Feb 20;PP. doi: 10.1109/TVCG.2024.3367431.
8. Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis. IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2758-2769. doi: 10.1109/TPAMI.2023.3335311. Epub 2024 Apr 3.
9. Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion. Sensors (Basel). 2024 Mar 16;24(6):1919. doi: 10.3390/s24061919.
10. Dynamic surface reconstruction in robot-assisted minimally invasive surgery based on neural radiance fields. Int J Comput Assist Radiol Surg. 2024 Mar;19(3):519-530. doi: 10.1007/s11548-023-03016-8. Epub 2023 Sep 28.

Cited By

1. A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS. Sensors (Basel). 2025 Sep 15;25(18):5748. doi: 10.3390/s25185748.
2. MBS-NeRF: reconstruction of sharp neural radiance fields from motion-blurred sparse images. Sci Rep. 2025 Feb 12;15(1):5275. doi: 10.1038/s41598-025-88614-z.