
A LiDAR-Camera Joint Calibration Algorithm Based on Deep Learning.

Authors

Ren Fujie, Liu Haibin, Wang Huanjie

Affiliation

College of Mechanical and Energy Engineering, Beijing University of Technology, Beijing 100124, China.

Publication

Sensors (Basel). 2024 Sep 18;24(18):6033. doi: 10.3390/s24186033.

Abstract

Multisensor (MS) data fusion is important for improving the stability of vehicle environmental perception systems, and MS joint calibration is a prerequisite for fusing multimodal sensors. Traditional calibration methods based on calibration boards require the manual extraction of many features and manual registration, making the calibration process cumbersome and error-prone. A joint calibration algorithm for Light Detection and Ranging (LiDAR) and camera sensors is proposed based on deep learning, without the need for special calibration objects. The deep-learning network model automatically captures object features in the environment and completes the calibration by matching and computing those features. A mathematical model for joint LiDAR-camera calibration was constructed, and the sensor joint calibration process was analyzed in detail. The network model determines the parameters of the rotation and translation matrices, thereby fixing the relative spatial positions of the two sensors and completing the joint calibration. The network consists of three parts: a feature extraction module, a feature-matching module, and a feature aggregation module. The feature extraction module extracts features from the color and depth images, the feature-matching module computes the correlation between the two, and the feature aggregation module determines the calibration matrix parameters. The proposed algorithm was validated and tested on the KITTI-odometry dataset and compared with other state-of-the-art algorithms. The experimental results show an average translation error of 0.26 cm and an average rotation error of 0.02°, lower than those of the other state-of-the-art algorithms.
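The abstract describes a mathematical model in which the network's predicted rotation matrix R and translation vector t relate the LiDAR frame to the camera frame. As a minimal sketch of that underlying calibration model (not the authors' implementation), the snippet below shows the standard rigid transform plus pinhole projection used to map LiDAR points into the image plane given candidate extrinsics and camera intrinsics K; the function name and interface are illustrative assumptions.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D LiDAR points into the camera image plane.

    Illustrative sketch of the LiDAR-camera calibration model:
    points_lidar : (N, 3) points in the LiDAR frame
    R            : (3, 3) rotation matrix, LiDAR -> camera
    t            : (3,)   translation vector, LiDAR -> camera
    K            : (3, 3) camera intrinsic matrix
    """
    # Rigid transform into the camera frame: P_cam = R @ P_lidar + t
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera (positive depth)
    in_front = points_cam[:, 2] > 0
    points_cam = points_cam[in_front]

    # Pinhole projection: p ~ K @ P_cam, then divide by depth
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, points_cam[:, 2]  # pixel coordinates and depths
```

In a learning-based pipeline of the kind summarized above, a projection like this is typically used to render the LiDAR point cloud as a depth image, whose features are then matched against the color-image features to regress corrections to R and t.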

