
Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving.

Affiliation

Institute of Robotics and Machine Intelligence, Poznan University of Technology, 60-965 Poznan, Poland.

Publication

Sensors (Basel). 2022 Jan 30;22(3):1082. doi: 10.3390/s22031082.

DOI: 10.3390/s22031082
PMID: 35161827
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8837921/
Abstract

Detecting pedestrians in autonomous driving is a safety-critical task, and the decision to avoid a person has to be made with minimal latency. Multispectral approaches that combine RGB and thermal images are researched extensively, as they make it possible to gain robustness under varying illumination and weather conditions. State-of-the-art solutions employing deep neural networks offer high accuracy of pedestrian detection. However, the literature is short of works that evaluate multispectral pedestrian detection with respect to its feasibility in obstacle avoidance scenarios, taking into account the motion of the vehicle. Therefore, we investigated the real-time neural network detector architecture You Only Look Once, the latest version (YOLOv4), and demonstrated that this detector can be adapted to multispectral pedestrian detection. It can achieve accuracy on par with the state of the art while being highly computationally efficient, thereby supporting low-latency decision making. The results achieved on the KAIST dataset were evaluated from the perspective of automotive applications, where low latency and a low number of false negatives are critical parameters. The middle fusion approach to YOLOv4 in its Tiny variant achieved the best accuracy-to-computational-efficiency trade-off among the evaluated architectures.

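The abstract contrasts fusion points for the two modalities: early fusion feeds concatenated raw channels to one backbone, while middle fusion (the best trade-off found here, in the Tiny variant) merges per-modality feature maps partway through the network. A minimal sketch of that distinction, using only tensor shapes — `conv_stem` is a toy stand-in for the first stage of a YOLOv4 backbone, not the authors' implementation, and the channel counts are illustrative assumptions:

```python
import numpy as np

def conv_stem(x, out_channels):
    """Toy stand-in for a conv stage: 2x spatial downsample, new channel count.
    (A real backbone would learn this; here we only model the shape change.)"""
    n, c, h, w = x.shape
    pooled = x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))
    pooled = pooled.mean(axis=1, keepdims=True)
    return np.repeat(pooled, out_channels, axis=1)

rgb = np.random.rand(1, 3, 416, 416).astype(np.float32)      # RGB frame
thermal = np.random.rand(1, 1, 416, 416).astype(np.float32)  # aligned thermal frame

# Early fusion: concatenate raw channels; a single shared backbone sees 4 channels.
early_in = np.concatenate([rgb, thermal], axis=1)            # (1, 4, 416, 416)

# Middle fusion: each modality gets its own stem; feature maps are concatenated
# partway through, and the rest of the detector runs on the fused features.
f_rgb = conv_stem(rgb, 32)                                   # (1, 32, 208, 208)
f_thermal = conv_stem(thermal, 32)                           # (1, 32, 208, 208)
middle_fused = np.concatenate([f_rgb, f_thermal], axis=1)    # (1, 64, 208, 208)

print(early_in.shape, middle_fused.shape)
```

Middle fusion costs two stems instead of one, which is why the paper weighs accuracy against computational efficiency when picking the fusion point.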

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/7dd3ce5c2e88/sensors-22-01082-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/f9beaaf5d1eb/sensors-22-01082-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/373e5fa5eed9/sensors-22-01082-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/8ec1b6623b22/sensors-22-01082-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/81599186fe70/sensors-22-01082-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/f69e8cc779cb/sensors-22-01082-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/fae1cbe80397/sensors-22-01082-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/8eb9af8aa794/sensors-22-01082-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/92ce854ad584/sensors-22-01082-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/9542070303d9/sensors-22-01082-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/c581559db81b/sensors-22-01082-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/c531fd3c0e19/sensors-22-01082-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/07eb/8837921/c1a3419bc647/sensors-22-01082-g013.jpg

Similar Articles

1. Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving.
Sensors (Basel). 2022 Jan 30;22(3):1082. doi: 10.3390/s22031082.
2. Attention Fusion for One-Stage Multispectral Pedestrian Detection.
Sensors (Basel). 2021 Jun 18;21(12):4184. doi: 10.3390/s21124184.
3. Pedestrian Detection Using Multispectral Images and a Deep Neural Network.
Sensors (Basel). 2021 Apr 4;21(7):2536. doi: 10.3390/s21072536.
4. An Unsupervised Transfer Learning Framework for Visible-Thermal Pedestrian Detection.
Sensors (Basel). 2022 Jun 10;22(12):4416. doi: 10.3390/s22124416.
5. Deep Multimodal Detection in Reduced Visibility Using Thermal Depth Estimation for Autonomous Driving.
Sensors (Basel). 2022 Jul 6;22(14):5084. doi: 10.3390/s22145084.
6. A Lightweight Vehicle-Pedestrian Detection Algorithm Based on Attention Mechanism in Traffic Scenarios.
Sensors (Basel). 2022 Nov 4;22(21):8480. doi: 10.3390/s22218480.
7. Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility.
Sensors (Basel). 2019 Aug 28;19(17):3727. doi: 10.3390/s19173727.
8. INSANet: INtra-INter Spectral Attention Network for Effective Feature Fusion of Multispectral Pedestrian Detection.
Sensors (Basel). 2024 Feb 10;24(4):1168. doi: 10.3390/s24041168.
9. Towards High Accuracy Pedestrian Detection on Edge GPUs.
Sensors (Basel). 2022 Aug 10;22(16):5980. doi: 10.3390/s22165980.
10. A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection.
Sensors (Basel). 2023 Apr 21;23(8):4167. doi: 10.3390/s23084167.

Cited By

1. Fusion of Visible and Infrared Aerial Images from Uncalibrated Sensors Using Wavelet Decomposition and Deep Learning.
Sensors (Basel). 2024 Dec 23;24(24):8217. doi: 10.3390/s24248217.
2. A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions.
Sensors (Basel). 2024 Aug 7;24(16):5108. doi: 10.3390/s24165108.
3. Sensor-Fused Nighttime System for Enhanced Pedestrian Detection in ADAS and Autonomous Vehicles.

References

1. Real-Time Detection of Non-Stationary Objects Using Intensity Data in Automotive LiDAR SLAM.
Sensors (Basel). 2021 Oct 13;21(20):6781. doi: 10.3390/s21206781.
2. Attention Fusion for One-Stage Multispectral Pedestrian Detection.
Sensors (Basel). 2021 Jun 18;21(12):4184. doi: 10.3390/s21124184.
3. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
Sensors (Basel). 2024 Jul 22;24(14):4755. doi: 10.3390/s24144755.
4. Object Detection, Recognition, and Tracking Algorithms for ADASs-A Study on Recent Trends.
Sensors (Basel). 2023 Dec 31;24(1):249. doi: 10.3390/s24010249.
5. Multispectral Benchmark Dataset and Baseline for Forklift Collision Avoidance.
Sensors (Basel). 2022 Oct 19;22(20):7953. doi: 10.3390/s22207953.
6. Comparison of Pedestrian Detectors for LiDAR Sensor Trained on Custom Synthetic, Real and Mixed Datasets.
Sensors (Basel). 2022 Sep 16;22(18):7014. doi: 10.3390/s22187014.
7. A Thermal Infrared Pedestrian-Detection Method for Edge Computing Devices.
Sensors (Basel). 2022 Sep 5;22(17):6710. doi: 10.3390/s22176710.
8. YOLOv5-AC: Attention Mechanism-Based Lightweight YOLOv5 for Track Pedestrian Detection.
Sensors (Basel). 2022 Aug 7;22(15):5903. doi: 10.3390/s22155903.
9. An Unsupervised Transfer Learning Framework for Visible-Thermal Pedestrian Detection.
Sensors (Basel). 2022 Jun 10;22(12):4416. doi: 10.3390/s22124416.
10. Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
11. Design of a Scalable and Fast YOLO for Edge-Computing Devices.
Sensors (Basel). 2020 Nov 27;20(23):6779. doi: 10.3390/s20236779.
12. Object Detection With Deep Learning: A Review.
IEEE Trans Neural Netw Learn Syst. 2019 Nov;30(11):3212-3232. doi: 10.1109/TNNLS.2018.2876865. Epub 2019 Jan 28.
13. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
14. Using local binary patterns as features for classification of dolphin calls.
J Acoust Soc Am. 2013 Jul;134(1):EL105-11. doi: 10.1121/1.4811162.