Liu Haipeng, Ma Meiyan, Wang Meng, Chen Zhaoyu, Zhao Yibo
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
Yunnan Province Key Laboratory of Computer, Kunming University of Science and Technology, Kunming 650500, China.
Entropy (Basel). 2023 Jun 27;25(7):985. doi: 10.3390/e25070985.
The aim of infrared and visible image fusion is to integrate the complementary information of the two modalities into high-quality fused images. However, many deep learning fusion algorithms do not account for the characteristics of infrared images in low-light scenes, so existing methods suffer from weak texture details, low contrast of infrared targets, and poor visual perception. In this paper, we therefore propose a salient compensation-based fusion method that makes full use of the characteristics of infrared and visible images to generate high-quality fused images under low-light conditions. First, we design a multi-scale edge gradient module (MEGB) in the texture mainstream to adequately extract texture information from the dual infrared and visible inputs; in parallel, a saliency branch built on the salient dense residual module (SRDB) is pre-trained with a saliency loss to produce saliency maps, whose salient features supplement the main network during overall training. We further propose a spatial bias module (SBM) to fuse global information with local information. Finally, extensive comparisons with existing methods show that our method has significant advantages in describing target features and global scenes, and ablation experiments demonstrate the effectiveness of each proposed module. In addition, we verify that our method benefits high-level vision on a semantic segmentation task.
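The abstract does not specify how the spatial bias module (SBM) combines global and local information; the snippet below is only an illustrative numpy sketch of the general idea of injecting pooled global context back into local features. The function name, the linear projection, and its random weights are placeholders of our own invention, not the paper's implementation.

```python
import numpy as np

def spatial_bias_fuse(feat, seed=0):
    """Toy sketch: fuse a global descriptor with local features.

    `feat` is a (C, H, W) feature map. A global descriptor is
    average-pooled over the spatial dimensions, passed through a
    stand-in channel-mixing projection, and broadcast back onto
    every spatial location as an additive bias.
    """
    c, h, w = feat.shape
    rng = np.random.default_rng(seed)
    # Global branch: pool to a per-channel (C,) descriptor.
    g = feat.mean(axis=(1, 2))
    # Placeholder for a learned projection mixing channels.
    wg = rng.standard_normal((c, c)) * 0.1
    bias = wg @ g                       # (C,)
    # Local branch keeps the original map; add the global bias.
    return feat + bias[:, None, None]   # (C, H, W)

x = np.ones((4, 6, 6))
y = spatial_bias_fuse(x)
print(y.shape)  # (4, 6, 6)
```

A real module would learn the projection end-to-end and typically combine the branches with convolution and attention rather than a fixed random matrix; the sketch only shows the broadcast pattern by which global context biases every spatial position.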