Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China.
Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China.
Sensors (Basel). 2018 Nov 19;18(11):4036. doi: 10.3390/s18114036.
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods degrade under complex conditions because of the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF), which can provide accurate and robust state estimation for robots in indoor environments. We first modify monocular ORBSLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to work with multiple fisheye cameras and an inertial measurement unit (IMU), which together provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure efficient state estimation: a GPU (Graphics Processing Unit) based feature extraction method is adopted, and the feature extraction thread is separated from the tracking thread and run in parallel with the mapping thread. Finally, a nonlinear optimization method is formulated for accurate state estimation, characterized as multi-keyframe, tightly-coupled, and visual-inertial. In addition, an accurate initialization procedure and a novel MultiCol-IMU camera model are incorporated to further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly-coupled multi-keyframe visual-inertial odometry that fuses measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments on home-made datasets, showing improved accuracy and robustness over the state-of-the-art VINS-Mono.
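The abstract's threading claim (feature extraction separated from tracking and run in parallel with mapping) is easier to picture with a sketch. The following is a minimal C++ illustration of that pipeline shape only, not the paper's implementation; all names here (Frame, SafeQueue, extractOrbFeatures, trackFrame, localMapping) are hypothetical placeholders, and the GPU-based extraction is stubbed out.

```cpp
// Sketch of a three-stage VO pipeline: feature extraction in its own thread,
// handing extracted frames to tracking, while mapping runs in parallel.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { std::vector<float> keypoints; };  // placeholder frame type

// Simple thread-safe queue used to hand data between pipeline stages.
template <typename T>
class SafeQueue {
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    // Blocks until an item is available; returns false once closed and drained.
    bool pop(T& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
    bool closed_ = false;
};

// Hypothetical stage functions standing in for the real pipeline work.
Frame extractOrbFeatures(int rawImageId) { return Frame{{float(rawImageId)}}; }
void trackFrame(const Frame&) { /* pose estimation against the local map */ }
void localMapping() { /* keyframe insertion, local bundle adjustment */ }

int main() {
    SafeQueue<int> rawImages;  // camera -> feature extraction
    SafeQueue<Frame> frames;   // feature extraction -> tracking

    // Feature extraction thread: decoupled from tracking so that (GPU-based,
    // per the paper) ORB extraction never stalls pose estimation.
    std::thread extraction([&] {
        int id;
        while (rawImages.pop(id)) frames.push(extractOrbFeatures(id));
        frames.close();
    });

    // Mapping thread runs in parallel with extraction and tracking.
    std::atomic<bool> mappingOn{true};
    std::thread mapping([&] {
        while (mappingOn) {
            localMapping();
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    });

    for (int i = 0; i < 10; ++i) rawImages.push(i);  // fake camera input
    rawImages.close();

    // Tracking consumes extracted frames on the main thread.
    Frame f;
    while (frames.pop(f)) trackFrame(f);

    mappingOn = false;
    extraction.join();
    mapping.join();
}
```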
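The abstract does not spell out the "multi-keyframe, tightly-coupled, visual-inertial" cost it optimizes. For orientation, a generic tightly-coupled formulation of this kind (in the style of VINS-Mono, against which the paper compares) jointly minimizes IMU preintegration residuals and multi-camera reprojection residuals over a window of keyframe states; the notation below is illustrative, not the paper's exact definition:

$$
\min_{\mathcal{X}}\;\sum_{k\in\mathcal{K}}\big\lVert r_{\mathcal{I}}\big(\hat{z}_{b_k b_{k+1}},\mathcal{X}\big)\big\rVert_{P_{b_k b_{k+1}}}^{2}
\;+\;\sum_{(l,j)\in\mathcal{C}}\rho\Big(\big\lVert r_{\mathcal{C}}\big(\hat{z}_{l}^{c_j},\mathcal{X}\big)\big\rVert_{P_{l}^{c_j}}^{2}\Big)
$$

where $\mathcal{X}$ stacks the keyframe poses, velocities, IMU biases, and landmark parameters; $r_{\mathcal{I}}$ is the IMU preintegration residual between consecutive keyframes $b_k$ and $b_{k+1}$; $r_{\mathcal{C}}$ is the reprojection residual of landmark $l$ observed in fisheye camera $c_j$ (projected through a multi-camera model such as the paper's MultiCol-IMU model); and $\rho(\cdot)$ is a robust norm such as Huber.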