Estimation of the Closest In-Path Vehicle by Low-Channel LiDAR and Camera Sensor Fusion for Autonomous Vehicles.

Affiliations

Daegu Gyeongbuk Institute of Science & Technology (DGIST), College of Transdisciplinary Studies, Daegu 333, Korea.

Department of Interdisciplinary Engineering, Daegu Gyeongbuk Institute of Science & Technology (DGIST), Daegu 333, Korea.

Publication information

Sensors (Basel). 2021 Apr 30;21(9):3124. doi: 10.3390/s21093124.

Abstract

In autonomous driving, recognizing preceding vehicles at middle and long distances with a variety of sensors helps improve driving performance and enables the development of various functions. However, if only LiDAR or only a camera is used in the recognition stage, the limitations of each sensor make it difficult to obtain the necessary data. In this paper, we propose a method for converting vision-tracked data into bird's-eye-view (BEV) coordinates using the equation that projects LiDAR points onto the image, together with a method for fusing the LiDAR and vision-tracked data. The effectiveness of the proposed method is demonstrated by its detection of the closest in-path vehicle (CIPV) in various situations. In addition, in experiments following the Euro NCAP autonomous emergency braking (AEB) test protocol, the fusion result improved AEB performance through better perception than LiDAR alone. The performance of the proposed method was verified through real-vehicle tests in various scenarios. Consequently, the proposed sensor fusion method significantly improved the adaptive cruise control (ACC) function in autonomous maneuvering. We expect this improvement in perception performance to contribute to the overall stability of ACC.
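The abstract describes projecting LiDAR points onto the image plane and inverting that projection, under a flat-ground assumption, to place vision-tracked objects in BEV coordinates. The sketch below illustrates the general pinhole-projection idea only; `K`, `R`, and `t` are hypothetical placeholder values, not the paper's actual calibration, and the LiDAR frame is assumed axis-aligned with the camera frame (x right, y down, z forward) for simplicity:

```python
import numpy as np

# Hypothetical calibration (placeholders, not the paper's values):
K = np.array([[800.0,   0.0, 320.0],    # camera intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # LiDAR-to-camera rotation (frames assumed aligned)
t = np.array([0.0, 1.2, 0.0]) # translation: camera mounted 1.2 m above the LiDAR

def lidar_to_image(p_lidar):
    """Project a 3-D LiDAR point onto the image plane (pixel coordinates)."""
    p_cam = R @ p_lidar + t            # transform into the camera frame
    u, v, w = K @ p_cam                # apply the pinhole projection
    return np.array([u / w, v / w])    # normalize homogeneous coordinates

def image_to_bev(u, v, y_ground):
    """Back-project a pixel to BEV (ground-plane) coordinates, assuming the
    tracked point lies on a flat ground plane at camera-frame height y_ground."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    scale = y_ground / ray[1]                       # intersect ray with ground plane
    p_cam = scale * ray
    p_lidar = R.T @ (p_cam - t)                     # back into the LiDAR/BEV frame
    return p_lidar[[0, 2]]                          # lateral (x) and forward (z)
```

Round-tripping a ground point through both functions recovers its BEV position, which is the consistency the paper's fusion relies on: a point 10 m ahead and 2 m to the right projects to a pixel, and that pixel (with the ground-plane assumption) maps back to the same BEV location.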

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/71a8/8125378/83b653d88970/sensors-21-03124-g0A1.jpg

Similar articles

1
Fast vehicle detection based on colored point cloud with bird's eye view representation.
Sci Rep. 2023 May 8;13(1):7447. doi: 10.1038/s41598-023-34479-z.
2
Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
3
Free Space Detection Using Camera-LiDAR Fusion in a Bird's Eye View Plane.
Sensors (Basel). 2021 Nov 17;21(22):7623. doi: 10.3390/s21227623.
4
Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots.
Sensors (Basel). 2018 Aug 20;18(8):2730. doi: 10.3390/s18082730.
5
Delving Into the Devils of Bird's-Eye-View Perception: A Review, Evaluation and Recipe.
IEEE Trans Pattern Anal Mach Intell. 2024 Apr;46(4):2151-2170. doi: 10.1109/TPAMI.2023.3333838. Epub 2024 Mar 6.

Cited by

1
Study on Multi-Heterogeneous Sensor Data Fusion Method Based on Millimeter-Wave Radar and Camera.
Sensors (Basel). 2023 Jun 29;23(13):6044. doi: 10.3390/s23136044.
2
Inter-row information recognition of maize in the middle and late stages via LiDAR supplementary vision.
Front Plant Sci. 2022 Dec 1;13:1024360. doi: 10.3389/fpls.2022.1024360. eCollection 2022.

References

1
Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.
Sensors (Basel). 2014 Mar 17;14(3):5333-53. doi: 10.3390/s140305333.
