Cui Ziyang, Wang Yi, Chen Xiaodong, Cai Huaiyu
Key Laboratory of Opto-Electronics Information Technology of Ministry of Education, College of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China.
Sensors (Basel). 2025 Jul 23;25(15):4558. doi: 10.3390/s25154558.
Accurate extrinsic calibration between LiDAR and cameras is essential for effective sensor fusion, directly impacting the perception capabilities of autonomous driving systems. Although prior calibration approaches using planar and point features have yielded some success, they suffer from inherent limitations. Specifically, methods that rely on fitting planar contours using depth-discontinuous points are prone to systematic errors, which hinder the precise extraction of the 3D positions of feature points. This, in turn, compromises the accuracy and robustness of the calibration. To overcome these challenges, this paper introduces a novel 3D calibration plate incorporating gradient depth, localization markers, and corner features. At the point cloud level, gradient depth enables accurate estimation of the 3D coordinates of feature points. At the image level, corner features and localization markers facilitate rapid and precise acquisition of 2D pixel coordinates, with minimal interference from environmental noise. This method establishes a rigorous and systematic framework to enhance the accuracy of LiDAR-camera extrinsic calibration. In a simulated environment, experimental results demonstrate that the proposed algorithm achieves a rotation error below 0.002 radians and a translation error below 0.005 m.
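The core estimation problem the abstract describes, recovering the rotation and translation between the LiDAR and camera from matched 3D feature coordinates (point cloud side) and 2D pixel coordinates (image side), can be sketched with a standard Direct Linear Transform (DLT) perspective-n-point solve. The paper's actual optimization is not given in the abstract, so this numpy sketch uses hypothetical synthetic data and only illustrates the 3D-2D setup and the rotation (radians) and translation (meters) error metrics the abstract quotes:

```python
import numpy as np

def dlt_extrinsics(K, pts3d, pts2d):
    """Recover extrinsics (R, t) from 3D-2D correspondences via DLT.

    pts3d: (n, 3) feature points in the LiDAR frame (non-coplanar, n >= 6).
    pts2d: (n, 2) matching pixel coordinates. Noise-free data assumed.
    """
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(pts3d[i], 1.0)       # homogeneous 3D point
        u, v = pts2d[i]
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -v * Xh
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)    # projection matrix, up to scale
    M = np.linalg.inv(K) @ P                     # = s * [R | t]
    s = 1.0 / np.linalg.norm(M[2, :3])           # rows of R are unit length
    if s * (M[2, :3] @ pts3d[0] + M[2, 3]) < 0:  # points must be in front
        s = -s
    Rt = s * M
    U, _, Vt = np.linalg.svd(Rt[:, :3])          # project onto SO(3)
    return U @ Vt, Rt[:, 3]

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Hypothetical setup: pinhole intrinsics, small rotation, points ~5 m ahead.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R_gt = rot_z(0.05) @ rot_y(-0.03)
t_gt = np.array([0.1, -0.05, 0.2])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 5.0])
cam = pts3d @ R_gt.T + t_gt                      # LiDAR -> camera frame
uvw = cam @ K.T                                  # project to the image
pts2d = uvw[:, :2] / uvw[:, 2:]

R_est, t_est = dlt_extrinsics(K, pts3d, pts2d)
# Error metrics of the kind quoted in the abstract:
rot_err = np.arccos(np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0))
trans_err = np.linalg.norm(t_gt - t_est)
```

With noise-free correspondences the DLT recovers the extrinsics to numerical precision; the paper's contribution is precisely making the 3D inputs (`pts3d`) and 2D inputs (`pts2d`) accurate on real sensor data, which a linear solve like this cannot fix on its own.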