

LFIR-YOLO: Lightweight Model for Infrared Vehicle and Pedestrian Detection

Authors

Wang Quan, Liu Fengyuan, Cao Yi, Ullah Farhan, Zhou Muxiong

Affiliations

School of Internet of Things Engineering, Wuxi University, Wuxi 214105, China.

School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China.

Publication

Sensors (Basel). 2024 Oct 14;24(20):6609. doi: 10.3390/s24206609.

DOI: 10.3390/s24206609
PMID: 39460089
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11511348/
Abstract

The complexity of urban road scenes at night and the inadequacy of visible light imaging in such conditions pose significant challenges. To address the issues of insufficient color information, texture detail, and low spatial resolution in infrared imagery, we propose an enhanced infrared detection model called LFIR-YOLO, which is built upon the YOLOv8 architecture. The primary goal is to improve the accuracy of infrared target detection in nighttime traffic scenarios while meeting practical deployment requirements. First, to address challenges such as limited contrast and occlusion noise in infrared images, the C2f module in the high-level backbone network is augmented with a module incorporating multi-scale infrared contextual information to enhance feature extraction capabilities. Second, at the neck of the network, a mechanism is applied to fuse features and re-modulate both initial and advanced features, catering to the low signal-to-noise ratio and sparse detail features characteristic of infrared images. Third, a shared convolution strategy is employed in the detection head, replacing the decoupled head strategy and utilizing shared and operations to achieve lightweight yet precise improvements. Finally, loss functions, and are integrated into the model to better decouple infrared targets from the background and to enhance convergence speed. The experimental results on the FLIR and multispectral datasets show that the proposed LFIR-YOLO model achieves an improvement in detection accuracy of 4.3% and 2.6%, respectively, compared to the YOLOv8 model. Furthermore, the model demonstrates a reduction in parameters and computational complexity by 15.5% and 34%, respectively, enhancing its suitability for real-time deployment on resource-constrained edge devices.
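The lightweighting effect of replacing per-level decoupled heads with a shared-convolution head can be illustrated with a back-of-the-envelope parameter count. This is a sketch only: the channel widths, head width, and output sizes below are assumptions for illustration, not the paper's actual configuration, and the computed saving applies to the head alone, not to the whole model (where the paper reports a 15.5% overall reduction).

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a k x k convolution with bias."""
    return c_in * c_out * k * k + c_out

# Assumed channel widths for three feature-pyramid levels (illustrative only).
levels = [128, 256, 512]
branch = 128        # hypothetical head width
num_out = 64 + 4    # hypothetical class + box output channels

# Decoupled head: each level carries its own classification and
# regression conv stacks, so parameters scale with the number of levels.
decoupled = sum(
    conv_params(c, branch, 3) * 2       # per-level cls + reg 3x3 convs
    + conv_params(branch, num_out, 1) * 2  # per-level cls + reg predictions
    for c in levels
)

# Shared head: each level is first projected to a common width by a
# cheap 1x1 conv, then one 3x3 conv stack and one prediction conv are
# reused across all levels.
shared = (
    sum(conv_params(c, branch, 1) for c in levels)  # per-level projections
    + conv_params(branch, branch, 3)                # shared 3x3 conv
    + conv_params(branch, num_out, 1)               # shared prediction conv
)

print(f"decoupled head: {decoupled:,} params")
print(f"shared head:    {shared:,} params")
print(f"head-only reduction: {1 - shared / decoupled:.1%}")
```

The per-level 1x1 projections are cheap relative to the shared 3x3 stack, which is why weight sharing across pyramid levels shrinks the head substantially while keeping per-level predictions.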


Figures: sensors-24-06609 g001–g016 (full-size images available via PMC11511348).

Similar Articles

1. LFIR-YOLO: Lightweight Model for Infrared Vehicle and Pedestrian Detection. Sensors (Basel). 2024 Oct 14;24(20):6609. doi: 10.3390/s24206609.
2. IV-YOLO: A Lightweight Dual-Branch Object Detection Network. Sensors (Basel). 2024 Sep 24;24(19):6181. doi: 10.3390/s24196181.
3. A Lightweight Strip Steel Surface Defect Detection Network Based on Improved YOLOv8. Sensors (Basel). 2024 Oct 9;24(19):6495. doi: 10.3390/s24196495.
4. A lightweight Yunnan Xiaomila detection and pose estimation based on improved YOLOv8. Front Plant Sci. 2024 Jun 5;15:1421381. doi: 10.3389/fpls.2024.1421381. eCollection 2024.
5. A Method for Real-Time Recognition of Safflower Filaments in Unstructured Environments Using the YOLO-SaFi Model. Sensors (Basel). 2024 Jul 8;24(13):4410. doi: 10.3390/s24134410.
6. MRD-YOLO: A Multispectral Object Detection Algorithm for Complex Road Scenes. Sensors (Basel). 2024 May 18;24(10):3222. doi: 10.3390/s24103222.
7. An Infrared Image Defect Detection Method for Steel Based on Regularized YOLO. Sensors (Basel). 2024 Mar 5;24(5):1674. doi: 10.3390/s24051674.
8. A Lightweight underwater detector enhanced by Attention mechanism, GSConv and WIoU on YOLOv8. Sci Rep. 2024 Oct 28;14(1):25797. doi: 10.1038/s41598-024-75809-z.
9. YOLOv8-RMDA: Lightweight YOLOv8 Network for Early Detection of Small Target Diseases in Tea. Sensors (Basel). 2024 May 1;24(9):2896. doi: 10.3390/s24092896.
10. Lightweight Substation Equipment Defect Detection Algorithm for Small Targets. Sensors (Basel). 2024 Sep 12;24(18):5914. doi: 10.3390/s24185914.

References Cited in This Article

1. Infrared Dim Small Target Detection Networks: A Review. Sensors (Basel). 2024 Jun 15;24(12):3885. doi: 10.3390/s24123885.
2. Personnel Detection in Dark Aquatic Environments Based on Infrared Thermal Imaging Technology and an Improved YOLOv5s Model. Sensors (Basel). 2024 May 23;24(11):3321. doi: 10.3390/s24113321.
3. Deep Learning for Visual Speech Analysis: A Survey. IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6001-6022. doi: 10.1109/TPAMI.2024.3376710. Epub 2024 Aug 7.
4. DEA-Net: Single Image Dehazing Based on Detail-Enhanced Convolution and Content-Guided Attention. IEEE Trans Image Process. 2024;33:1002-1015. doi: 10.1109/TIP.2024.3354108. Epub 2024 Jan 26.
5. Powerful-IoU: More straightforward and faster bounding box regression loss with a nonmonotonic focusing mechanism. Neural Netw. 2024 Feb;170:276-284. doi: 10.1016/j.neunet.2023.11.041. Epub 2023 Nov 22.
6. YOLO-IR-Free: An Improved Algorithm for Real-Time Detection of Vehicles in Infrared Images. Sensors (Basel). 2023 Oct 26;23(21):8723. doi: 10.3390/s23218723.
7. Object detection using YOLO: challenges, architectural successors, datasets and applications. Multimed Tools Appl. 2023;82(6):9243-9275. doi: 10.1007/s11042-022-13644-y. Epub 2022 Aug 8.
8. FCOS: A Simple and Strong Anchor-Free Object Detector. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1922-1933. doi: 10.1109/TPAMI.2020.3032166. Epub 2022 Mar 4.
9. Automated Vehicles and Pedestrian Safety: Exploring the Promise and Limits of Pedestrian Detection. Am J Prev Med. 2019 Jan;56(1):1-7. doi: 10.1016/j.amepre.2018.06.024. Epub 2018 Oct 15.
10. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.