LFIR-YOLO: Lightweight Model for Infrared Vehicle and Pedestrian Detection

Authors

Wang Quan, Liu Fengyuan, Cao Yi, Ullah Farhan, Zhou Muxiong

Affiliations

School of Internet of Things Engineering, Wuxi University, Wuxi 214105, China.

School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China.

Publication

Sensors (Basel). 2024 Oct 14;24(20):6609. doi: 10.3390/s24206609.

Abstract

The complexity of urban road scenes at night and the inadequacy of visible-light imaging under such conditions pose significant challenges. To address the lack of color information, limited texture detail, and low spatial resolution of infrared imagery, we propose an enhanced infrared detection model, LFIR-YOLO, built upon the YOLOv8 architecture. The primary goal is to improve the accuracy of infrared target detection in nighttime traffic scenarios while meeting practical deployment requirements. First, to address challenges such as limited contrast and occlusion noise in infrared images, the C2f module in the high-level backbone network is augmented with an additional module that incorporates multi-scale infrared contextual information to enhance feature extraction. Second, at the neck of the network, a fusion mechanism re-modulates both initial and advanced features, catering to the low signal-to-noise ratio and sparse detail features characteristic of infrared images. Third, a shared-convolution strategy replaces the decoupled head in the detection head, achieving a lightweight yet precise design. Finally, dedicated loss functions are integrated into the model to better decouple infrared targets from the background and to accelerate convergence. Experimental results on the FLIR and multispectral datasets show that LFIR-YOLO improves detection accuracy by 4.3% and 2.6%, respectively, compared to the YOLOv8 model. Furthermore, the model reduces parameters and computational complexity by 15.5% and 34%, respectively, enhancing its suitability for real-time deployment on resource-constrained edge devices.
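The "multi-scale infrared contextual information" idea in the backbone can be illustrated with a pooling-pyramid sketch: context is gathered at several window sizes and fused back into the feature map. This is a generic illustration, not the paper's exact module; the scale set and fusion by averaging are assumptions.

```python
import numpy as np

def multi_scale_context(feat, scales=(1, 2, 4)):
    """Aggregate context at several scales and fuse it back into the map.

    feat: (C, H, W) feature map; H and W must be divisible by every scale.
    Illustrative pooling pyramid, not the module used in LFIR-YOLO.
    """
    c, h, w = feat.shape
    fused = np.zeros_like(feat)
    for s in scales:
        # average-pool with an s x s window ...
        pooled = feat.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        # ... then nearest-neighbour upsample back to (H, W)
        up = pooled.repeat(s, axis=1).repeat(s, axis=2)
        fused += up
    return fused / len(scales)

feat = np.random.rand(8, 16, 16).astype(np.float32)
out = multi_scale_context(feat)
print(out.shape)  # same spatial size as the input
```

With `scales=(1,)` the function reduces to the identity, which makes the fusion easy to sanity-check.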

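The shared-convolution head strategy can be motivated with a back-of-the-envelope parameter count: instead of each detection scale owning its own pair of convolution branches, every scale is first projected to a common width and then passed through one shared pair. All channel widths below are hypothetical and not taken from the paper.

```python
def conv_params(c_in, c_out, k=3):
    # weights + biases of a single k x k convolution layer
    return c_in * c_out * k * k + c_out

# Hypothetical channel widths for three detection scales (P3, P4, P5).
scales = [128, 256, 512]
head_width, n_branches = 128, 2  # two branches: classification and regression

# Decoupled head: every scale owns its own two-conv branch pair.
decoupled = sum(
    n_branches * (conv_params(c, head_width) + conv_params(head_width, head_width))
    for c in scales
)

# Shared head: a cheap 1x1 projection per scale to a common width,
# then one shared two-conv branch pair serves all scales.
shared = (
    sum(conv_params(c, head_width, k=1) for c in scales)
    + n_branches * 2 * conv_params(head_width, head_width)
)

print(decoupled, shared)
print(f"parameter reduction: {1 - shared / decoupled:.1%}")
```

Even in this toy setup the shared head needs far fewer parameters, which is consistent in spirit with the 15.5% parameter reduction the abstract reports for the full model.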

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d551/11511348/a2269ab760ae/sensors-24-06609-g001.jpg
