Research Institute for Smart Cities, School of Architecture and Urban Planning, Shenzhen University, Shenzhen 518060, China.
Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China.
Sensors (Basel). 2018 May 1;18(5):1385. doi: 10.3390/s18051385.
Traditionally, visual RGB-D SLAM systems use only correspondences with valid depth values for camera tracking, ignoring image regions without 3D information. Because depth sensors impose strict limits on measurement distance and viewing angle, such systems rely solely on short-range constraints, which can introduce large drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that exploits both 2D and 3D correspondences for RGB-D tracking. Our method addresses the problem by using visual features both where depth information is available and where it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, coarse pose tracking generates initial camera poses from 3D correspondences via frame-by-frame registration. These initial poses are then fed into the geometric integration model together with the 3D correspondences, 2D-3D correspondences, and 2D correspondences identified from frame pairs. The initial 3D location of a correspondence is determined in one of two ways: from the depth image, or by triangulation using the initial poses. The model iteratively refines the camera poses and reduces drift error during long-distance RGB-D tracking. Experiments were conducted on data sequences collected with commercial Structure Sensors. The results verify that geometric integration of hybrid correspondences effectively reduces drift error and improves mapping accuracy. Furthermore, the model enables comparative and synergistic use of datasets containing both 2D and 3D features.
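The two geometric building blocks named in the abstract can be illustrated concretely. The sketch below is not the authors' implementation; it is a minimal, generic version of (a) closed-form rigid alignment of two 3D point sets (the Kabsch/Umeyama method, one standard way to register 3D correspondences frame-by-frame), and (b) linear (DLT) triangulation, one standard way to initialize the 3D location of a 2D correspondence from two camera poses. All function names and parameters here are illustrative assumptions, not from the paper.

```python
import numpy as np

def estimate_rigid_pose(src, dst):
    """Closed-form rigid alignment (Kabsch) of matched 3D point sets.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3) and t (3,) such that dst ~= src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one pixel correspondence.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns the 3D point minimizing the algebraic error.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a pipeline like the one the abstract describes, `estimate_rigid_pose` would supply the coarse frame-to-frame poses from depth-valid matches, while `triangulate` would supply initial 3D positions for matches that lack depth, before a joint iterative refinement over all correspondence types.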