
Deep Coarse-to-Fine Dense Light Field Reconstruction With Flexible Sampling and Geometry-Aware Fusion

Author Information

Jin Jing, Hou Junhui, Chen Jie, Zeng Huanqiang, Kwong Sam, Yu Jingyi

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1819-1836. doi: 10.1109/TPAMI.2020.3026039. Epub 2022 Mar 4.

Abstract

A densely-sampled light field (LF) is highly desirable in various applications, such as 3-D reconstruction, post-capture refocusing and virtual reality. However, it is costly to acquire such data. Although many computational methods have been proposed to reconstruct a densely-sampled LF from a sparsely-sampled one, they still suffer from either low reconstruction quality, low computational efficiency, or the restriction on the regularity of the sampling pattern. To this end, we propose a novel learning-based method, which accepts sparsely-sampled LFs with irregular structures, and produces densely-sampled LFs with arbitrary angular resolution accurately and efficiently. We also propose a simple yet effective method for optimizing the sampling pattern. Our proposed method, an end-to-end trainable network, reconstructs a densely-sampled LF in a coarse-to-fine manner. Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF. Then, the efficient LF refinement module learns the angular relationship within the intermediate result to recover the LF parallax structure. Comprehensive experimental evaluations demonstrate the superiority of our method on both real-world and synthetic LF images when compared with state-of-the-art methods. In addition, we illustrate the benefits and advantages of the proposed approach when applied in various LF-based applications, including image-based rendering and depth estimation enhancement. The code is available at https://github.com/jingjin25/LFASR-FS-GAF.
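The abstract's confidence-based blending strategy — fusing novel sub-aperture images that were synthesized independently from different input views — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function name and the per-pixel softmax weighting are assumptions chosen to make the idea concrete.

```python
import numpy as np

def confidence_blend(warped_views, confidences):
    """Fuse independently synthesized novel-view estimates into one image.

    warped_views: list of (H, W) arrays, each a candidate novel SAI
        warped from a different input view.
    confidences:  list of (H, W) arrays of per-pixel reliability scores
        (higher = more trustworthy, e.g. unoccluded, well-textured).
    Returns the per-pixel confidence-weighted average of the candidates.
    """
    views = np.stack(warped_views)    # (N, H, W)
    scores = np.stack(confidences)    # (N, H, W)
    # Softmax over the view axis turns raw scores into blending weights
    # that sum to 1 at every pixel (subtracting the max for stability).
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * views).sum(axis=0)
```

With equal confidences this reduces to a plain average; where one view's confidence dominates (e.g. the others are occluded at that pixel), its pixel value is passed through almost unchanged.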

