Daegu Gyeongbuk Institute of Science & Technology (DGIST), College of Transdisciplinary Studies, Daegu 333, Korea.
Department of Interdisciplinary Engineering, Daegu Gyeongbuk Institute of Science & Technology (DGIST), Daegu 333, Korea.
Sensors (Basel). 2021 Apr 30;21(9):3124. doi: 10.3390/s21093124.
In autonomous driving, using a variety of sensors to recognize preceding vehicles at middle and long distances helps improve driving performance and enables various functions. However, if only LiDAR or only a camera is used in the recognition stage, it is difficult to obtain the necessary data due to the limitations of each sensor. In this paper, we propose a method for converting vision-tracked data into bird's-eye-view (BEV) coordinates using the equation that projects LiDAR points onto the image, together with a method for fusing the LiDAR and vision-tracked data. The effectiveness of the proposed method is demonstrated by its results in detecting the closest in-path vehicle (CIPV) in various situations. In addition, when tested under the Euro NCAP autonomous emergency braking (AEB) protocol, the fusion result improved AEB performance through better perception than LiDAR alone. The performance of the proposed method was validated through real-vehicle tests in various scenarios. Consequently, the proposed sensor fusion method significantly improves the adaptive cruise control (ACC) function in autonomous maneuvering. We expect this improvement in perception performance to contribute to the overall stability of ACC.
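The conversion the abstract refers to can be illustrated with the standard pinhole projection model: LiDAR points are mapped into the image with the camera intrinsics and the LiDAR-to-camera extrinsics, and a tracked pixel is mapped back to BEV ground-plane coordinates by inverting that projection under a flat-ground assumption. The sketch below is illustrative only; the calibration matrices `K`, `R`, and `t` are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical calibration for illustration (not from the paper).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # camera intrinsic matrix
R = np.eye(3)                             # LiDAR-to-camera rotation
t = np.zeros(3)                           # LiDAR-to-camera translation

def lidar_to_pixel(p_lidar):
    """Project a 3-D LiDAR point into pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t               # transform into the camera frame
    uvw = K @ p_cam                       # homogeneous image coordinates
    return uvw[:2] / uvw[2]               # perspective divide

def pixel_to_bev(u, v, ground_z=0.0):
    """Back-project a pixel to BEV (x, y), assuming the point lies on a
    flat plane at height ground_z in the LiDAR frame (inverse projection)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_lidar = R.T @ ray_cam                            # ray in the LiDAR frame
    origin = -R.T @ t                                    # camera center, LiDAR frame
    s = (ground_z - origin[2]) / ray_lidar[2]            # scale to hit the plane
    p = origin + s * ray_lidar
    return p[:2]
```

A round trip through both functions recovers the original ground-plane position, which is the property that lets tracked image coordinates be fused with LiDAR detections in a common BEV frame.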