Deep Multi-Exposure Image Fusion for Dynamic Scenes.

Authors

Tan Xiao, Chen Huaian, Zhang Rui, Wang Qihan, Kan Yan, Zheng Jinjin, Jin Yi, Chen Enhong

Publication

IEEE Trans Image Process. 2023;32:5310-5325. doi: 10.1109/TIP.2023.3315123. Epub 2023 Sep 22.

Abstract

Recently, learning-based multi-exposure fusion (MEF) methods have made significant progress. However, these methods mainly focus on static scenes and are prone to generating ghosting artifacts in the more common scenario where the input images contain motion, owing to the lack of a benchmark dataset and a dedicated solution for dynamic scenes. In this paper, we fill this gap by creating an MEF dataset of dynamic scenes, which contains multi-exposure image sequences and their corresponding high-quality reference images. To construct such a dataset, we propose a 'static-for-dynamic' strategy to obtain multi-exposure sequences with motion and their corresponding reference images. To the best of our knowledge, this is the first MEF dataset of dynamic scenes. Correspondingly, we propose a deep dynamic MEF (DDMEF) framework that reconstructs a ghost-free, high-quality image from only two differently exposed images of a dynamic scene. DDMEF operates in two steps: pre-enhancement-based alignment and privilege-information-guided fusion. The former pre-enhances the input images before alignment, which helps to resolve the misalignments caused by the large exposure difference. The latter introduces a privilege distillation scheme with an information attention transfer loss, which effectively improves the deghosting ability of the fusion network. Extensive qualitative and quantitative experiments show that the proposed method outperforms state-of-the-art dynamic MEF methods. The source code and dataset are released at https://github.com/Tx000/Deep_dynamicMEF.
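The abstract only outlines the DDMEF pipeline at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of that two-step design (pre-enhancement, alignment, then fusion with an attention-transfer-style distillation loss), intended purely to make the data flow concrete. All module and function names here (PreEnhance, Align, FuseNet, attention_transfer_loss) and their internals are illustrative assumptions, not the authors' implementation; the actual code is in the linked GitHub repository.

```python
# Hypothetical sketch of the two-step DDMEF pipeline described in the
# abstract. All architectures and names below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + ReLU; a stand-in for the paper's real sub-networks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)
    )


class PreEnhance(nn.Module):
    """Step 1a (assumed form): pre-enhance each input so the under-/over-
    exposed images become similar enough in brightness for alignment."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(3, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 3, padding=1)
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # brightness-normalized image


class Align(nn.Module):
    """Step 1b (assumed form): align the non-reference exposure to the
    reference one. Here a plain conv stack on the concatenated enhanced
    pair; the real method may use flow or deformable alignment instead."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(6, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 3, padding=1)
        )

    def forward(self, ref_enh, src_enh, src):
        # Predict a correction from the enhanced pair, apply it to the
        # original (un-enhanced) source image.
        return src + self.body(torch.cat([ref_enh, src_enh], dim=1))


class FuseNet(nn.Module):
    """Step 2 (assumed form): fuse the aligned pair into one output, while
    exposing an intermediate attention map for privilege distillation."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(conv_block(6, ch), conv_block(ch, ch))
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, a, b):
        feat = self.enc(torch.cat([a, b], dim=1))
        attn = feat.abs().mean(dim=1, keepdim=True)  # spatial attention map
        return torch.sigmoid(self.dec(feat)), attn


def attention_transfer_loss(student_attn, teacher_attn):
    """One plausible form of the 'information attention transfer loss':
    match L2-normalized attention maps of a privileged teacher (which also
    sees the ghost-free reference) and the student fusion network."""
    def norm(a):
        return F.normalize(a.flatten(1), dim=1)
    return F.mse_loss(norm(student_attn), norm(teacher_attn))


if __name__ == "__main__":
    under = torch.rand(1, 3, 64, 64)  # under-exposed input
    over = torch.rand(1, 3, 64, 64)   # over-exposed input (with motion)
    enh, align, fuse = PreEnhance(), Align(), FuseNet()

    u_e, o_e = enh(under), enh(over)      # step 1a: pre-enhancement
    o_aligned = align(u_e, o_e, over)     # step 1b: align to the under-exposed view
    fused, attn = fuse(under, o_aligned)  # step 2: fusion

    teacher_attn = torch.rand_like(attn)  # placeholder for a privileged teacher's map
    loss = attention_transfer_loss(attn, teacher_attn)
    print(fused.shape, attn.shape, loss.item())
```

Under this reading, the distillation term would be added to the ordinary reconstruction loss during training, so the student fusion network learns to attend to the same regions as a teacher that had privileged access to ghost-free information.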
