Fast ORB-SLAM Without Keypoint Descriptors.

Authors

Fu Qiang, Yu Hongshan, Wang Xiaolong, Yang Zhengeng, He Yong, Zhang Hong, Mian Ajmal

Publication

IEEE Trans Image Process. 2022;31:1433-1446. doi: 10.1109/TIP.2021.3136710. Epub 2022 Feb 3.

Abstract

Indirect methods for visual SLAM are gaining popularity due to their robustness to environmental variations. ORB-SLAM2 (Mur-Artal and Tardós, 2017) is a benchmark method in this domain; however, it consumes significant time computing descriptors that never get reused unless a frame is selected as a keyframe. To overcome this problem, we present FastORB-SLAM, which is lightweight and efficient as it tracks keypoints between adjacent frames without computing descriptors. To achieve this, a two-stage descriptor-independent keypoint matching method is proposed based on sparse optical flow. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical flow tracking. In the second stage, we leverage the constraints of motion smoothness and epipolar geometry to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency to nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
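
The two-stage, descriptor-free matching pipeline described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration built on OpenCV, not the authors' released implementation: a constant-velocity motion model seeds pyramid-based Lucas-Kanade tracking (stage one), then motion-smoothness and RANSAC epipolar-geometry checks prune the correspondences (stage two). The function name, window size, pyramid levels, and thresholds are illustrative choices.

```python
import cv2
import numpy as np

def track_keypoints(prev_img, cur_img, prev_pts, prev_flow):
    """Two-stage descriptor-free tracking between adjacent grayscale frames.

    prev_pts  : Nx1x2 float32 keypoint locations in the previous frame.
    prev_flow : Nx1x2 float32 per-point displacement from the frame before,
                used as a constant-velocity motion-model prediction (assumed).
    """
    # Stage 1a: predict initial correspondences with the motion model.
    predicted = (prev_pts + prev_flow).astype(np.float32)

    # Stage 1b: refine the prediction with pyramid-based sparse optical flow
    # (Lucas-Kanade), seeded by the predicted positions.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, prev_pts, predicted,
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    good = status.ravel() == 1
    p0, p1 = prev_pts[good], cur_pts[good]

    # Stage 2a: motion-smoothness constraint -- drop matches whose flow
    # deviates far from the median flow (simple illustrative heuristic).
    flow = (p1 - p0).reshape(-1, 2)
    dev = np.linalg.norm(flow - np.median(flow, axis=0), axis=1)
    smooth = dev < 3.0 * (np.median(dev) + 1e-6)
    p0, p1 = p0[smooth], p1[smooth]

    # Stage 2b: epipolar-geometry constraint via RANSAC on the fundamental matrix.
    if len(p0) >= 8:
        _, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.999)
        if inliers is not None:
            keep = inliers.ravel() == 1
            p0, p1 = p0[keep], p1[keep]
    return p0, p1  # matched keypoints; no descriptors computed
```

In the paper's design, ORB descriptors would then be computed only when a frame is promoted to a keyframe; tracking between keyframes relies solely on correspondences of this kind, which is where the reported speedup over ORB-SLAM2 comes from.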

