A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method.

Authors

Qiu Haiyang, Zhang Xu, Wang Hui, Xiang Dan, Xiao Mingming, Zhu Zhiyu, Wang Lei

Affiliations

School of Naval Architecture and Ocean Engineering, Guangzhou Maritime University, Guangzhou 510725, China.

School of Automation, Jiangsu University of Science and Technology, Zhenjiang 212013, China.

Publication

Sensors (Basel). 2023 Oct 23;23(20):8655. doi: 10.3390/s23208655.

DOI: 10.3390/s23208655
PMID: 37896748
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10611077/
Abstract

In this paper, we propose a robust and integrated visual odometry framework exploiting the optical flow and feature point method, which achieves faster pose estimation with considerable accuracy and robustness during the odometry process. Our method uses optical flow tracking to accelerate feature point matching. Two visual odometry methods are combined in the odometry: a global feature point method and a local feature point method. When optical flow tracking is good and enough key points are matched successfully, the local feature point method uses prior information from the optical flow to estimate the relative pose transformation. When optical flow tracking is poor and only a small number of key points match successfully, the feature point method with a filtering mechanism is used for pose estimation. By coupling and correlating these two methods, the visual odometry greatly reduces the computation time for relative pose estimation: it cuts the time to 40% of that of the ORB_SLAM3 front-end odometry while keeping accuracy and robustness close to those of the ORB_SLAM3 front end. The method was validated and analyzed on the EuRoC dataset within the ORB_SLAM3 open-source framework, and the experimental results support the efficacy of the proposed approach.
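
The hybrid front end described above is straightforward to sketch. Below is a minimal illustration (not the authors' implementation) of the switching logic, built on OpenCV: KLT optical flow (calcOpticalFlowPyrLK) supplies cheap frame-to-frame matches while tracking is healthy, and the pipeline falls back to ORB detection with ratio-test-filtered descriptor matching when too few tracks survive; RANSAC on the essential matrix stands in for the paper's filtering mechanism. The threshold MIN_TRACKS, the 0.75 ratio constant, and the relative_pose helper are illustrative assumptions, not values from the paper.

```python
# Sketch of a hybrid optical-flow / feature-point VO front end (assumed
# structure, not the paper's code). Requires: pip install opencv-python numpy
import cv2
import numpy as np

MIN_TRACKS = 50  # assumed switching threshold, not from the paper

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def relative_pose(prev_img, cur_img, prev_pts, K):
    """Estimate the relative pose between two grayscale frames.

    prev_pts: Nx1x2 float32 keypoint locations in prev_img.
    K: 3x3 camera intrinsic matrix.
    Returns (R, t, cur_pts) or None on failure.
    """
    # 1. Local path: KLT optical flow gives cheap putative matches.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, None)
    good = status.ravel() == 1
    p0, p1 = prev_pts[good], cur_pts[good]

    if len(p0) < MIN_TRACKS:
        # 2. Global fallback: tracking is poor, so re-detect ORB features
        #    and match descriptors, filtering with Lowe's ratio test.
        kp0, des0 = orb.detectAndCompute(prev_img, None)
        kp1, des1 = orb.detectAndCompute(cur_img, None)
        if des0 is None or des1 is None:
            return None
        pairs = matcher.knnMatch(des0, des1, k=2)
        matches = [m for m, n in pairs if m.distance < 0.75 * n.distance]
        if len(matches) < 8:
            return None
        p0 = np.float32([kp0[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        p1 = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # 3. Pose from the epipolar constraint; RANSAC rejects bad tracks/matches.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    if E is None:
        return None
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t, p1

# Typical use: seed prev_pts with cv2.goodFeaturesToTrack(prev_img, 1000, 0.01, 10)
# and feed the returned cur_pts back in as the next frame's prev_pts.
```

Note that the paper couples the two paths more tightly (the local feature point method consumes optical flow priors directly, rather than merely switching modes); this sketch only demonstrates the mode switch and the shared epipolar pose estimation.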

Figures 1-15 (sensors-23-08655-g001 through g015): see the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC10611077/.

Similar Articles

1. A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method.
Sensors (Basel). 2023 Oct 23;23(20):8655. doi: 10.3390/s23208655.
2. Unsupervised Monocular Visual Odometry for Fast-Moving Scenes Based on Optical Flow Network with Feature Point Matching Constraint.
Sensors (Basel). 2022 Dec 9;22(24):9647. doi: 10.3390/s22249647.
3. Robust Stereo Visual-Inertial Odometry Using Nonlinear Optimization.
Sensors (Basel). 2019 Aug 29;19(17):3747. doi: 10.3390/s19173747.
4. Fast and Robust Monocular Visual-Inertial Odometry Using Points and Lines.
Sensors (Basel). 2019 Oct 19;19(20):4545. doi: 10.3390/s19204545.
5. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End.
Sensors (Basel). 2019 Apr 25;19(8):1941. doi: 10.3390/s19081941.
6. Mix-VIO: A Visual Inertial Odometry Based on a Hybrid Tracking Strategy.
Sensors (Basel). 2024 Aug 12;24(16):5218. doi: 10.3390/s24165218.
7. LRPL-VIO: A Lightweight and Robust Visual-Inertial Odometry with Point and Line Features.
Sensors (Basel). 2024 Feb 18;24(4):1322. doi: 10.3390/s24041322.
8. Event-based feature tracking in a visual inertial odometry framework.
Front Robot AI. 2023 Feb 14;10:994488. doi: 10.3389/frobt.2023.994488. eCollection 2023.
9. A robust method for approximate visual robot localization in feature-sparse sewer pipes.
Front Robot AI. 2023 Mar 6;10:1150508. doi: 10.3389/frobt.2023.1150508. eCollection 2023.
10. A Robust Parallel Initialization Method for Monocular Visual-Inertial SLAM.
Sensors (Basel). 2022 Oct 29;22(21):8307. doi: 10.3390/s22218307.
