Yan Bao, Zhao Longjie, Miao Kehua, Wang Song, Li Qinghua, Luo Delin
School of Aerospace Engineering, Xiamen University, Xiamen 361102, China.
Electric Power Research Institute, China Southern Power Grid, Guangzhou 510063, China.
Sensors (Basel). 2024 Mar 7;24(6):1735. doi: 10.3390/s24061735.
The fusion of infrared and visible images is a well-researched task in computer vision. Fusion methods produce a single composite image that replaces manual inspection of separate single-sensor images, and they are often deployed on edge devices for real-time processing. However, infrared and visible images carry imbalanced information: existing methods often fail to emphasize temperature and edge-texture information, potentially leading to misinterpretation, and their computational complexity makes them difficult to adapt to edge devices. This paper proposes a method that calculates the distribution proportion of infrared pixel values and allocates fusion weights accordingly to adaptively highlight key information. It introduces a weight-allocation mechanism and a MobileBlock with a multispectral information complementary module; these innovations strengthen the model's fusion capability, make it more lightweight, and ensure information compensation. Training uses a temperature-color-perception loss function, enabling adaptive weight allocation based on the information in each image pair. Experimental results show superiority over mainstream fusion methods, particularly on electric power equipment scenes and publicly available datasets.
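The abstract's core idea, weighting fusion by the distribution proportion of infrared pixel values, can be illustrated with a minimal sketch. The paper's actual network and loss are not reproduced here; this toy version (function names and the rarity-based weighting rule are illustrative assumptions) simply gives rare, typically high-temperature IR intensities larger per-pixel fusion weights:

```python
import numpy as np

def infrared_proportion_weights(ir, n_bins=256):
    """Illustrative sketch (not the paper's method): weight each IR pixel
    by how rare its intensity is in the image, so thermal hotspots, which
    occupy a small proportion of the distribution, receive larger weights."""
    hist, _ = np.histogram(ir, bins=n_bins, range=(0, n_bins))
    prop = hist / hist.sum()          # distribution proportion per intensity bin
    rarity = 1.0 - prop[ir]           # rarer intensity -> larger raw weight
    # Normalize weights to [0, 1]
    return (rarity - rarity.min()) / (np.ptp(rarity) + 1e-8)

def fuse(ir, vis):
    """Pixel-wise weighted blend of aligned grayscale IR/visible images."""
    w = infrared_proportion_weights(ir)
    return np.rint(w * ir + (1.0 - w) * vis).astype(ir.dtype)
```

For example, on an IR image that is mostly cool background with one hot pixel, the hot pixel gets weight near 1 and dominates the fused output, while background pixels fall back to the visible image.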