
NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors.

Authors

Shen I-Chao, Liu Hao-Kang, Chen Bing-Yu

Publication

IEEE Comput Graph Appl. 2024 Mar-Apr;44(2):100-109. doi: 10.1109/MCG.2023.3336224. Epub 2024 Mar 25.

Abstract

Neural radiance field (NeRF) has emerged as a versatile scene representation. However, editing a pretrained NeRF remains unintuitive because the network parameters and the scene appearance are often not explicitly associated. In this article, we introduce the first framework that enables users to retouch undesired regions in a pretrained NeRF scene without access to any training data or category-specific data priors. The user first draws a free-form mask to specify a region containing the unwanted objects over an arbitrary rendered view from the pretrained NeRF. Our framework transfers the user-drawn mask to other rendered views and estimates guiding color and depth images within the transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions by updating NeRF's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results with less manual user effort.
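The joint optimization the abstract describes can be sketched as a masked color-plus-depth reconstruction loss summed over all rendered views: inside each transferred mask, the rendered colors and depths are pulled toward the guiding images, and NeRF's parameters are updated to minimize this loss. The following minimal NumPy sketch is an illustration of that objective only, with hypothetical array shapes and a hypothetical `depth_weight` balancing term; the paper's actual method backpropagates such a loss through the NeRF network itself.

```python
import numpy as np

def masked_inpainting_loss(rendered_rgb, guide_rgb,
                           rendered_depth, guide_depth,
                           masks, depth_weight=0.1):
    """Masked color + depth reconstruction loss over all views.

    rendered_rgb, guide_rgb     : (V, H, W, 3) float arrays
    rendered_depth, guide_depth : (V, H, W) float arrays
    masks                       : (V, H, W) bool arrays, True inside
                                  the transferred user-drawn mask
    depth_weight                : relative weight of the depth term
                                  (hypothetical balancing constant)
    """
    # Per-pixel squared color error, summed over RGB channels.
    color_err = ((rendered_rgb - guide_rgb) ** 2).sum(axis=-1)  # (V, H, W)
    # Per-pixel squared depth error.
    depth_err = (rendered_depth - guide_depth) ** 2             # (V, H, W)
    # Penalize only the masked regions, averaged over masked pixels.
    n_masked = max(int(masks.sum()), 1)
    total = ((color_err + depth_weight * depth_err) * masks).sum()
    return total / n_masked
```

In the full method this scalar would be minimized with gradient descent on the NeRF weights, so that all masked regions are inpainted jointly and consistently across views rather than view by view.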
