

VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation

Affiliations

Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China.

Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China.

Publication

Sensors (Basel). 2018 Nov 19;18(11):4036. doi: 10.3390/s18114036.

Abstract

State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods degrade in complex conditions due to the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF) that provides accurate and robust state estimation for robots in indoor environments. We first extend monocular ORBSLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to multiple fisheye cameras together with an inertial measurement unit (IMU) to provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure efficient state estimation, by adopting a GPU (Graphics Processing Unit)-based feature extraction method and by separating feature extraction from the tracking thread so that it runs in parallel with the mapping thread. Finally, a nonlinear optimization method is formulated for accurate state estimation, characterized as multi-keyframe, tightly-coupled, and visual-inertial. In addition, accurate initialization and a novel MultiCol-IMU camera model are incorporated to further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly-coupled multi-keyframe visual-inertial odometry that fuses measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments on home-made datasets, and it showed improved accuracy and robustness over the state-of-the-art VINS-Mono.
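The tightly-coupled, multi-keyframe optimization described in the abstract follows the general visual-inertial bundle-adjustment pattern: IMU preintegration residuals and multi-camera reprojection residuals are minimized jointly over a window of keyframe states. A generic sketch of such an objective (illustrative notation, not the paper's exact formulation) is:

```latex
% X = {keyframe poses, velocities, IMU biases, landmark positions} (illustrative)
% First term: IMU preintegration residuals between consecutive keyframes k, k+1.
% Second term: reprojection residuals, summed over cameras c (the multi-fisheye
% rig), keyframes k, and features j observed in camera c at keyframe k;
% \rho is a robust kernel (e.g. Huber).
\min_{\mathcal{X}}
  \sum_{k} \bigl\| r_{\mathrm{IMU}}\bigl(z_{k,k+1},\,\mathcal{X}\bigr) \bigr\|^{2}_{P_{k}}
  \;+\;
  \sum_{c} \sum_{k} \sum_{j \in \mathcal{F}_{c,k}}
  \rho\Bigl( \bigl\| r_{\mathrm{proj}}\bigl(z^{c}_{k,j},\,\mathcal{X}\bigr) \bigr\|^{2}_{\Sigma_{c}} \Bigr)
```

The outer sum over cameras `c` is what distinguishes a multi-fisheye setup from a monocular one; the extrinsics of each camera relative to the IMU (the MultiCol-IMU model mentioned above) determine how each per-camera reprojection residual is expressed in the common body frame.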


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cad6/6263887/9208dd1fee9f/sensors-18-04036-g002.jpg
