

Evaluation of 3D Vulnerable Objects' Detection Using a Multi-Sensors System for Autonomous Vehicles.

Affiliations

Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt.

School of Engineering, University of Central Lancashire, Preston PR1 2HE, UK.

Publication

Sensors (Basel). 2022 Feb 21;22(4):1663. doi: 10.3390/s22041663.

DOI: 10.3390/s22041663
PMID: 35214569
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8874666/
Abstract

One of the primary tasks undertaken by autonomous vehicles (AVs) is object detection, which comes ahead of object tracking, trajectory estimation, and collision avoidance. Vulnerable road objects (e.g., pedestrians, cyclists, etc.) pose a greater challenge to the reliability of object detection due to their continuously changing behavior. The majority of commercially available AVs, and research into them, depend on expensive sensors, which hinders further research on AV operations. In this paper, therefore, we focus on the use of a lower-cost single-beam LiDAR in addition to a monocular camera to achieve multiple 3D vulnerable object detection in real driving scenarios, while maintaining real-time performance. This research also addresses problems faced during object detection, such as the complex interaction between objects where occlusion and truncation occur, and the dynamic changes in the perspective and scale of bounding boxes. The video-processing module builds on a deep-learning detector (YOLOv3), while the LiDAR measurements are pre-processed and grouped into clusters. The output of the proposed system is object classification and localization: bounding boxes accompanied by a third, depth dimension acquired by the LiDAR. Real-time tests show that the system can efficiently detect the 3D location of vulnerable objects in real driving scenarios.
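The pipeline described in the abstract pairs 2D detections from YOLOv3 with depth taken from clustered single-beam LiDAR returns. A minimal sketch of that association step follows; the range-gap clustering rule, the linear angle-to-pixel mapping, and all names and parameters (`cluster_lidar`, `attach_depth`, `gap`, `hfov_deg`) are illustrative assumptions, not the paper's actual implementation:

```python
def cluster_lidar(points, gap=0.5):
    """Group consecutive single-beam LiDAR returns (bearing_deg, range_m)
    into clusters wherever the range jumps by more than `gap` meters —
    a simple stand-in for the paper's LiDAR pre-processing stage."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if abs(cur[1] - prev[1]) > gap:
            clusters.append(current)
            current = []
        current.append(cur)
    clusters.append(current)
    return clusters

def attach_depth(boxes, clusters, hfov_deg=60.0, img_w=640):
    """Give each 2D detection (x1, y1, x2, y2, label) a depth: project each
    cluster's mean bearing to an image column and keep the nearest cluster
    whose column falls inside the box (nearest wins under occlusion)."""
    results = []
    for (x1, y1, x2, y2, label) in boxes:
        best = None
        for c in clusters:
            ang = sum(a for a, _ in c) / len(c)   # mean bearing (deg)
            rng = sum(r for _, r in c) / len(c)   # mean range (m)
            col = (ang / hfov_deg + 0.5) * img_w  # crude pinhole mapping
            if x1 <= col <= x2 and (best is None or rng < best):
                best = rng
        results.append((label, (x1, y1, x2, y2), best))
    return results

# Hypothetical scan: two distant returns, a near pair, one far return.
scan = [(-10.0, 5.0), (-9.0, 5.1), (0.0, 2.0), (1.0, 2.1), (10.0, 8.0)]
detections = [(300, 100, 400, 300, "pedestrian")]
print(attach_depth(detections, cluster_lidar(scan)))
```

For example, a pedestrian box spanning image columns 300–400 would be assigned the mean range of the cluster whose bearing projects into that span; taking the nearest such cluster is one simple way to resolve overlapping objects.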


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/28e15ae81fde/sensors-22-01663-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/964a6fbdbe84/sensors-22-01663-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/52a431d5ab08/sensors-22-01663-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/dac7b49d50a0/sensors-22-01663-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/3631e8ec18e9/sensors-22-01663-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/9fa339ea7ddd/sensors-22-01663-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/6f13bdb3fdf7/sensors-22-01663-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/44b4/8874666/0581f96534ea/sensors-22-01663-g008.jpg

Similar articles

1. Evaluation of 3D Vulnerable Objects' Detection Using a Multi-Sensors System for Autonomous Vehicles.
Sensors (Basel). 2022 Feb 21;22(4):1663. doi: 10.3390/s22041663.
2. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving.
Sensors (Basel). 2022 Dec 7;22(24):9577. doi: 10.3390/s22249577.
3. Real-Time 3D Multi-Object Detection and Localization Based on Deep Learning for Road and Railway Smart Mobility.
J Imaging. 2021 Aug 12;7(8):145. doi: 10.3390/jimaging7080145.
4. A New 3D Object Pose Detection Method Using LIDAR Shape Set.
Sensors (Basel). 2018 Mar 16;18(3):882. doi: 10.3390/s18030882.
5. Systematic and Comprehensive Review of Clustering and Multi-Target Tracking Techniques for LiDAR Point Clouds in Autonomous Driving Applications.
Sensors (Basel). 2023 Jul 3;23(13):6119. doi: 10.3390/s23136119.
6. A Survey on Ground Segmentation Methods for Automotive LiDAR Sensors.
Sensors (Basel). 2023 Jan 5;23(2):601. doi: 10.3390/s23020601.
7. Real-Time 3D Object Detection and SLAM Fusion in a Low-Cost LiDAR Test Vehicle Setup.
Sensors (Basel). 2021 Dec 15;21(24):8381. doi: 10.3390/s21248381.
8. A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection.
Sensors (Basel). 2023 Apr 21;23(8):4167. doi: 10.3390/s23084167.
9. Critical voxel learning with vision transformer and derivation of logical AV safety assessment scenarios.
Accid Anal Prev. 2024 Feb;195:107422. doi: 10.1016/j.aap.2023.107422. Epub 2023 Dec 8.
10. Up-Sampling Method for Low-Resolution LiDAR Point Cloud to Enhance 3D Object Detection in an Autonomous Driving Environment.
Sensors (Basel). 2022 Dec 28;23(1):322. doi: 10.3390/s23010322.

Cited by

1. A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments.
Sensors (Basel). 2023 Aug 5;23(15):6963. doi: 10.3390/s23156963.
2. Efficient three-dimensional point cloud object detection based on improved Complex-YOLO.
Front Neurorobot. 2023 Feb 16;17:1092564. doi: 10.3389/fnbot.2023.1092564. eCollection 2023.

References

1. A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges.
Sensors (Basel). 2021 Jan 21;21(3):706. doi: 10.3390/s21030706.
2. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research.
Sensors (Basel). 2019 Feb 5;19(3):648. doi: 10.3390/s19030648.
3. Structure-From-Motion in 3D Space Using 2D Lidars.
Sensors (Basel). 2017 Feb 3;17(2):242. doi: 10.3390/s17020242.
4. Review of visual odometry: types, approaches, challenges, and applications.
Springerplus. 2016 Oct 28;5(1):1897. doi: 10.1186/s40064-016-3573-7. eCollection 2016.