Huo Xing, Deng Yinping, Shao Kun
School of Mathematics, Hefei University of Technology, Hefei 230009, China.
School of Software, Hefei University of Technology, Hefei 230009, China.
Entropy (Basel). 2022 Nov 10;24(11):1633. doi: 10.3390/e24111633.
Existing fusion rules focus on retaining detailed information from the source images, but because the thermal radiation information in infrared images is characterized mainly by pixel intensity, such rules are likely to reduce the saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, which injects thermal targets from the infrared image into the visible image to enhance target saliency while retaining the important details of the visible image. First, the source images are decomposed with multi-level Gaussian curvature filtering to obtain background information with high spatial resolution. Second, the large-scale layers are fused using ResNet50 together with a maximum-weight strategy based on the average operator, improving detail retention. Finally, the base layers are fused by incorporating a new salient target detection method. Subjective and objective experiments on the TNO and MSRS datasets demonstrate that our method achieves better results than other traditional and deep learning-based methods.
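The decompose-then-fuse pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's method: plain Gaussian smoothing stands in for Gaussian curvature filtering, a pixel-wise maximum stands in for the salient-target-based base-layer rule, and an absolute-maximum rule replaces the ResNet50 weight maps; all function names and parameters are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding (stand-in for
    Gaussian curvature filtering)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    # Horizontal pass, then vertical pass.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, tmp)

def decompose(img, levels=3, sigma=2.0):
    """Multi-level decomposition: repeatedly smooth, keeping the
    residual of each level as a detail (large-scale) layer and the
    final smoothed image as the base layer."""
    base = img.astype(float)
    details = []
    for _ in range(levels):
        smoothed = gaussian_blur(base, sigma)
        details.append(base - smoothed)  # detail layer at this scale
        base = smoothed
    return base, details

def fuse(ir, vis, levels=3):
    """Fuse an infrared and a visible image (same shape, float arrays)."""
    base_ir, det_ir = decompose(ir, levels)
    base_vis, det_vis = decompose(vis, levels)
    # Base layers: pixel-wise maximum, a crude proxy for injecting
    # high-intensity thermal targets into the visible background.
    fused = np.maximum(base_ir, base_vis)
    # Detail layers: keep the coefficient with larger magnitude,
    # a simple substitute for learned (ResNet50) weight maps.
    for di, dv in zip(det_ir, det_vis):
        fused = fused + np.where(np.abs(di) >= np.abs(dv), di, dv)
    return fused
```

Note that the decomposition is exactly invertible: summing the base layer and all detail layers reconstructs the input, so any information lost in the fused result comes from the fusion rules, not the decomposition itself.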