Guo Wei, Li Yongtao, Li Hanyan, Chen Ziyou, Xu Enyong, Wang Shanchao, Gu Chengdong
School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545616, China.
School of Automation, Guangxi University of Science and Technology, Liuzhou 545616, China.
Sensors (Basel). 2024 Sep 18;24(18):6025. doi: 10.3390/s24186025.
To address the problem that the fusion of infrared and visible images is easily affected by illumination conditions, this paper proposes an adaptive illumination perception fusion mechanism and integrates it into an infrared and visible image fusion network. Spatial attention is applied to both the infrared and visible images for feature extraction, and deep convolutional neural networks extract further feature information. The adaptive illumination perception fusion mechanism is then incorporated into the image reconstruction stage to reduce the impact of lighting variations on the fused images. A Median Strengthening Channel and Spatial Attention Module (MSCS) was designed and integrated into the backbone of YOLOv8. The fusion network was used to build a dataset for training the target recognition network. The experimental results indicate that the improved YOLOv8 network achieves further gains of 2.3%, 1.4%, and 8.2% in Recall, mAP50, and mAP50-95, respectively. The experiments show that the improved YOLOv8 network has advantages in recognition rate and completeness while also reducing the rates of false negatives and false positives.
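The abstract does not give implementation details of the adaptive illumination perception fusion mechanism. The sketch below is one plausible reading, assuming an illumination-aware weighting scheme in the spirit of prior illumination-aware fusion networks: a small classifier predicts day/night probabilities from the visible image, and these probabilities weight the visible and infrared feature maps before reconstruction. The module name, layer sizes, and weighting rule are illustrative assumptions, not the authors' code.

# Hypothetical sketch of an adaptive illumination perception fusion step.
# A lightweight classifier estimates day/night probabilities from the visible
# image; the probabilities weight visible vs. infrared features before the
# reconstruction decoder. All names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class IlluminationAwareFusion(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Small classifier: two strided convs, global pooling, linear -> (day, night)
        self.classifier = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, vis_img, vis_feat, ir_feat):
        # Probabilities that the scene is daytime / nighttime
        p = torch.softmax(self.classifier(vis_img), dim=1)   # (B, 2)
        w_day = p[:, 0].view(-1, 1, 1, 1)
        w_night = p[:, 1].view(-1, 1, 1, 1)
        # In bright scenes lean on visible features; in dark scenes lean on infrared
        fused = w_day * vis_feat + w_night * ir_feat
        return fused, p

The point of such a design is that the fusion weights are driven by the estimated illumination rather than being fixed, which is consistent with the abstract's goal of reducing the effect of lighting variations on the fused image.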
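The MSCS block is likewise only named in the abstract. A minimal sketch is given below, assuming a CBAM-style channel-then-spatial attention design in which a median-pooling branch is added alongside the usual average- and max-pooling branches (one reading of "median strengthening"). The class name, reduction ratio, and kernel size are assumptions, not the paper's implementation.

# Hypothetical sketch of a Median Strengthening Channel and Spatial attention
# module (MSCS): CBAM-style attention with an extra median-pooled descriptor
# in both the channel and spatial branches. Illustrative only.
import torch
import torch.nn as nn

class MSCS(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Shared MLP applied to the average-, max-, and median-pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention over per-pixel average, max, and median along the channel axis
        self.spatial = nn.Conv2d(3, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: average, max, and median pooling over spatial dims
        avg = x.mean(dim=(2, 3), keepdim=True)
        mx = x.amax(dim=(2, 3), keepdim=True)
        med = x.flatten(2).median(dim=2).values.view(b, c, 1, 1)
        ca = torch.sigmoid(self.mlp(avg) + self.mlp(mx) + self.mlp(med))
        x = x * ca
        # Spatial attention: average, max, and median pooling over channels
        avg_s = x.mean(dim=1, keepdim=True)
        max_s = x.amax(dim=1, keepdim=True)
        med_s = x.median(dim=1, keepdim=True).values
        sa = torch.sigmoid(self.spatial(torch.cat([avg_s, max_s, med_s], dim=1)))
        return x * sa

Used as a drop-in block in a YOLOv8 backbone stage, e.g. y = MSCS(256)(torch.randn(2, 256, 40, 40)), the output keeps the input shape, so it can be inserted after a convolutional stage without changing the surrounding architecture.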