Ma Xiaole, Wang Zhihai, Hu Shaohai, Kan Shichao
School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China.
Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China.
Entropy (Basel). 2022 Apr 21;24(5):582. doi: 10.3390/e24050582.
Methods based on convolutional neural networks have demonstrated powerful information-integration ability in image fusion. However, most existing neural-network-based methods are applied to only part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed; it makes full use of image features by combining multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on synthetic and Lytro datasets demonstrate the effectiveness and superiority of the proposed MsGAN over state-of-the-art multi-focus image fusion methods.
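The abstract does not detail the MsGAN architecture. As a rough, hedged illustration of the idea it describes, combining multi-scale decomposition with a convolutional generator, the following PyTorch sketch builds an image pyramid from a source pair and fuses per-scale features into a single image. Every module name, scale count, and channel width here is an assumption made for illustration, not the authors' implementation.

```python
# Minimal sketch of a multi-scale generator for multi-focus fusion (PyTorch).
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class MultiScaleGenerator(nn.Module):
    """Fuse two source images by processing a multi-level image pyramid."""
    def __init__(self, scales=3, width=32):
        super().__init__()
        self.scales = scales
        # One feature extractor per scale; input is the concatenated grayscale source pair.
        self.encoders = nn.ModuleList([ConvBlock(2, width) for _ in range(scales)])
        self.fuse = nn.Conv2d(width * scales, 1, 1)  # 1x1 conv merges upsampled per-scale features

    def forward(self, src_a, src_b):
        x = torch.cat([src_a, src_b], dim=1)
        feats = []
        for s, enc in enumerate(self.encoders):
            xs = F.avg_pool2d(x, 2 ** s) if s > 0 else x  # coarser copies of the input pair
            fs = enc(xs)
            if s > 0:  # bring coarse features back to full resolution
                fs = F.interpolate(fs, size=x.shape[-2:], mode="bilinear", align_corners=False)
            feats.append(fs)
        return torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))  # fused image in [0, 1]

if __name__ == "__main__":
    a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # near/far focused pair
    fused = MultiScaleGenerator()(a, b)
    print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

In an adversarial setup such as the one the abstract names, a generator like this would be trained against a discriminator that scores fused results against all-in-focus references; that training loop and the discriminator are omitted here.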