
Light Field Synthesis by Training Deep Network in the Refocused Image Domain.

Author information

Liu Chang-Le, Shih Kuang-Tsu, Huang Jiun-Woei, Chen Homer H

Publication information

IEEE Trans Image Process. 2020 May 11. doi: 10.1109/TIP.2020.2992354.

Abstract

Light field imaging, which captures spatial-angular information of light incident on image sensors, enables many interesting applications such as image refocusing and augmented reality. However, due to the limited sensor resolution, a trade-off exists between the spatial and angular resolutions. To increase the angular resolution, view synthesis techniques have been adopted to generate new views from existing views. However, traditional learning-based view synthesis mainly considers the image quality of each view of the light field and neglects the quality of the refocused images. In this paper, we propose a new loss function called refocused image error (RIE) to address the issue. The main idea is that the image quality of the synthesized light field should be optimized in the refocused image domain because it is where the light field is viewed. We analyze the behavior of RIE in the spectral domain and test the performance of our approach against previous approaches on both real (INRIA) and software-rendered (HCI) light field datasets using objective assessment metrics such as MSE, MAE, PSNR, SSIM, and GMSD. Experimental results show that the light field generated by our method results in better refocused images than previous methods.
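The paper itself does not include code, but the core idea it describes, evaluating synthesis error after shift-and-add refocusing rather than per view, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names `refocus` and `refocused_image_error`, the set of focal slopes, and the use of integer-pixel shifts (a real implementation would use sub-pixel interpolation) are all assumptions for demonstration.

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-add refocusing of a light field.

    lf: array of shape (U, V, H, W) holding the angular views.
    slope: disparity per unit angular offset. Integer shifts only,
    for simplicity; a faithful implementation would interpolate.
    """
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - uc)))
            dx = int(round(slope * (v - vc)))
            # Shift each view toward the chosen focal plane, then average.
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

def refocused_image_error(lf_pred, lf_true, slopes=(-1.0, 0.0, 1.0)):
    """MSE between refocused images of the predicted and ground-truth
    light fields, averaged over a set of focal slopes."""
    errs = [np.mean((refocus(lf_pred, s) - refocus(lf_true, s)) ** 2)
            for s in slopes]
    return float(np.mean(errs))
```

The key difference from a conventional per-view loss is that errors in individual views that cancel out after the shift-and-add integration are penalized less, which matches how the light field is actually consumed when viewed as refocused images.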

