Favelli Stefano, Xie Meng, Tonoli Andrea
Center for Automotive Research and Sustainable Mobility (CARS@PoliTO), Politecnico di Torino, 10129 Torino, Italy.
Dipartimento di Ingegneria Meccanica e Aerospaziale (DIMEAS), Politecnico di Torino, 10129 Torino, Italy.
Sensors (Basel). 2024 Dec 10;24(24):7895. doi: 10.3390/s24247895.
The real-time fusion of data from multiple sensors is a crucial process for autonomous and assisted driving, where high-level controllers need classification of objects in the surroundings and estimation of their relative positions. This paper presents an open-source framework to estimate the distance between a sensor-equipped vehicle and different road objects on its path by fusing data from cameras, radars, and LiDARs. The target application is an Advanced Driver Assistance System (ADAS) that benefits from the integration of the sensors' attributes to plan the vehicle's speed according to real-time road occupation and distance from obstacles. A low-level sensor fusion approach based on geometric projection is proposed to map 3D point clouds onto 2D camera images. The fused information is used to estimate the distance of objects detected and labeled by a YOLOv7 detector. The open-source pipeline, implemented in ROS, consists of a sensor calibration method, a YOLOv7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. Accuracy and performance are evaluated in real-world urban scenarios on commercial hardware. Running on an embedded Nvidia Jetson AGX, the pipeline achieves good accuracy in object identification and distance estimation at 5 Hz. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception for assisted driving.
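The low-level fusion step described in the abstract, mapping 3D point clouds onto the 2D camera image via geometric projection, can be sketched with a standard pinhole-camera model. This is an illustrative sketch only: the function name, the intrinsic matrix `K`, and the extrinsic rotation/translation `R`, `t` are assumptions, not taken from the paper's implementation.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 LiDAR points (sensor frame) into pixel coordinates.

    K: 3x3 camera intrinsic matrix.
    R, t: extrinsic rotation and translation from the LiDAR frame
          to the camera frame (obtained from sensor calibration).
    Returns pixel coordinates and per-point depths, the latter usable
    as distance estimates for the associated image detections.
    """
    # Transform points into the camera reference frame
    cam = points_3d @ R.T + t
    # Keep only points in front of the camera (positive depth)
    cam = cam[cam[:, 2] > 0]
    # Perspective projection: normalize by depth, then apply intrinsics
    uv = (K @ (cam / cam[:, 2:3]).T).T[:, :2]
    return uv, cam[:, 2]

# Example with identity extrinsics and a simple intrinsic matrix
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[1.0, 0.5, 10.0]])  # one point 10 m ahead of the camera
uv, depth = project_points(pts, K, R, t)
# uv → [[370., 265.]], depth → [10.]
```

In a pipeline like the one described, pixels falling inside a detector's bounding box would be associated with that object, and their depths aggregated (e.g. from the clustered point cloud) to yield the object's distance.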