Wei Ming, Zhu Ming, Zhang Yaoyuan, Wang Jiarong, Sun Jiaqi
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China.
University of Chinese Academy of Sciences, Beijing, China.
Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.
The integration of multiple sensors is a crucial and emerging trend in the development of autonomous driving technology. Depth images obtained by stereo matching from a binocular camera are easily affected by the environment and by distance. LiDAR point clouds have strong penetrability but are much sparser than binocular images. LiDAR-stereo fusion can combine the complementary advantages of the two sensors and maximize the acquisition of reliable three-dimensional information, improving the safety of autonomous driving. Cross-sensor fusion is therefore a key issue in the development of autonomous driving technology. This study proposed a real-time LiDAR-stereo depth completion network without 3D convolution that fuses point clouds and binocular images via injection guidance. A kernel-connected spatial propagation network was then utilized to refine the depth, so that the dense 3D output is more accurate for autonomous driving. Experimental results on the KITTI dataset showed that our method runs in real time while remaining effective. Further, we demonstrated our solution's ability to cope with sensor defects and challenging environmental conditions using the p-KITTI dataset.
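The spatial-propagation refinement mentioned in the abstract can be illustrated with a minimal sketch of one CSPN-style iteration: each pixel's depth is replaced by an affinity-weighted blend of its eight neighbors plus a residual share of its own value. This is an assumed, simplified NumPy illustration of the general technique, not the paper's actual kernel-connected network; the function name `spn_refine` and the wrap-around neighbor handling via `np.roll` are choices made here for brevity.

```python
import numpy as np

def spn_refine(depth, affinity, iters=3):
    """Sketch of spatial-propagation depth refinement.

    depth:    (H, W) initial (e.g. fused LiDAR-stereo) depth map.
    affinity: (H, W, 8) learned weights for the 8 neighbors; the
              residual weight 1 - sum(affinity) keeps the pixel's
              own depth, so a constant map is a fixed point.
    """
    # 8-connected neighborhood offsets (dy, dx).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    d = depth.astype(float).copy()
    for _ in range(iters):
        # Residual term: each pixel keeps part of its own depth.
        out = (1.0 - affinity.sum(axis=-1)) * d
        # Add affinity-weighted contributions from each neighbor
        # (np.roll wraps at the borders; a real network would pad).
        for k, (dy, dx) in enumerate(offsets):
            shifted = np.roll(np.roll(d, dy, axis=0), dx, axis=1)
            out += affinity[..., k] * shifted
        d = out
    return d
```

In the full method the affinity maps would be predicted per pixel by the network from image features, so that propagation sharpens depth along edges while smoothing flat regions; here they are simply passed in as an array.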