Liu Yudan, Yang Xiaomin, Zhang Rongzhu, Albertini Marcelo Keese, Celik Turgay, Jeon Gwanggil
College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China.
Department of Computer Science, Federal University of Uberlandia, Uberlandia, MG 38408-100, Brazil.
Entropy (Basel). 2020 Jan 18;22(1):118. doi: 10.3390/e22010118.
Image fusion is a practical technology with applications in many fields, such as medicine, remote sensing, and surveillance. This paper introduces an image fusion method based on multi-scale decomposition and joint sparse representation. First, joint sparse representation decomposes the two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately, and the final weight maps are obtained from them by joint bilateral filtering. Then, the innovation images are decomposed at multiple scales with the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image, which is combined with the common image to produce the ultimate fused image. Experimental results show that our method achieves average metrics of mutual information (MI) 5.3377, feature mutual information (FMI) 0.5600, normalized weighted edge preservation value (Q^{AB/F}) 0.6978, and nonlinear correlation information entropy (NCIE) 0.8226. Compared with state-of-the-art methods, our method achieves better performance in both visual perception and objective quantification.
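The fusion pipeline described in the abstract can be sketched in simplified form. This is a hypothetical illustration, not the authors' code: the joint sparse representation step is omitted, and a plain box filter stands in for the Gaussian, joint bilateral, and rolling guidance filters used in the paper; the function names (`box_blur`, `rolling_guidance`, `fuse`) are our own.

```python
import numpy as np

def box_blur(img, r=1):
    """Box filter: a crude stand-in for the Gaussian / bilateral
    smoothing steps in the paper (illustrative simplification)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def rolling_guidance(img, iters=4, r=1):
    """Rolling-guidance-style smoothing: start from a blurred image and
    iteratively refine it. The paper uses a joint filter guided by the
    previous iterate; here each step is approximated by a box blur."""
    g = box_blur(img, r)
    for _ in range(iters):
        g = box_blur(g, r)  # stand-in for the guided filtering step
    return g

def fuse(a, b, r=1):
    """Weight-map fusion: per-pixel weights from local detail energy,
    smoothed to suppress noise (in place of joint bilateral filtering),
    then a per-pixel weighted average of the two inputs."""
    da = np.abs(a - box_blur(a, r))          # detail energy of image a
    db = np.abs(b - box_blur(b, r))          # detail energy of image b
    wa = box_blur((da >= db).astype(float), r)  # smoothed binary weight map
    return wa * a + (1 - wa) * b
```

As a convex per-pixel combination, the fused result stays within the intensity range of the inputs, e.g.:

```python
a = np.zeros((8, 8)); a[:, :4] = 1.0   # synthetic left-detail image
b = np.zeros((8, 8)); b[:, 4:] = 1.0   # synthetic right-detail image
f = fuse(a, b)                          # fused image, values in [0, 1]
```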