
Object-Based Reliable Visual Navigation for Mobile Robot

Affiliations

Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China.

Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China.

Publication

Sensors (Basel). 2022 Mar 20;22(6):2387. doi: 10.3390/s22062387.

DOI: 10.3390/s22062387
PMID: 35336558
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8949785/
Abstract

Visual navigation is of vital importance for autonomous mobile robots. Most existing practical perception-aware based visual navigation methods generally require prior-constructed precise metric maps, and learning-based methods rely on large training to improve their generality. To improve the reliability of visual navigation, in this paper, we propose a novel object-level topological visual navigation method. Firstly, a lightweight object-level topological semantic map is constructed to release the dependence on the precise metric map, where the semantic associations between objects are stored via graph memory and topological organization is performed. Then, we propose an object-based heuristic graph search method to select the global topological path with the optimal and shortest characteristics. Furthermore, to reduce the global cumulative error, a global path segmentation strategy is proposed to divide the global topological path on the basis of active visual perception and object guidance. Finally, to achieve adaptive smooth trajectory generation, a Bernstein polynomial-based smooth trajectory refinement method is proposed by transforming trajectory generation into a nonlinear planning problem, achieving smooth multi-segment continuous navigation. Experimental results demonstrate the feasibility and efficiency of our method on both simulation and real-world scenarios. The proposed method also obtains better navigation success rate (SR) and success weighted by inverse path length (SPL) than the state-of-the-art methods.
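The global planning step described in the abstract — a heuristic graph search over an object-level topological map — can be illustrated with a small sketch. This is not the authors' implementation: the object names, positions, and edge costs below are invented, and a plain A*-style search stands in for the paper's object-based heuristic search.

```python
import heapq
import math

# Toy object-level topological map: nodes are object landmarks, edges carry
# traversal costs. All values here are invented for illustration.
graph = {
    "door":  {"chair": 2.0, "table": 4.5},
    "chair": {"door": 2.0, "sofa": 3.0},
    "table": {"door": 4.5, "sofa": 1.5},
    "sofa":  {"chair": 3.0, "table": 1.5, "shelf": 2.5},
    "shelf": {"sofa": 2.5},
}
positions = {  # rough 2D positions of the object landmarks, for the heuristic
    "door": (0, 0), "chair": (2, 0), "table": (0, 4),
    "sofa": (3, 3), "shelf": (5, 4),
}

def heuristic(a, b):
    """Straight-line distance between two object landmarks."""
    (x1, y1), (x2, y2) = positions[a], positions[b]
    return math.hypot(x1 - x2, y1 - y2)

def topological_path(start, goal):
    """A*-style search over the object graph; returns a node sequence or None."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr, cost in graph[node].items():
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(
                    frontier,
                    (ng + heuristic(nbr, goal), ng, nbr, path + [nbr]),
                )
    return None

print(topological_path("door", "shelf"))  # a global topological path of object nodes
```

In the paper's framework, such a global path would then be segmented via active visual perception and object guidance before local trajectory generation.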

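The final stage of the pipeline evaluates trajectory segments as Bernstein polynomial (Bezier) curves. The sketch below shows only the evaluation of one such segment; the control points are invented, and the nonlinear optimization the paper uses to refine them is not shown.

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) for t in [0, 1]."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_point(ctrl, t):
    """Point on the degree-(len(ctrl)-1) Bernstein polynomial curve."""
    n = len(ctrl) - 1
    x = sum(bernstein(n, i, t) * px for i, (px, _) in enumerate(ctrl))
    y = sum(bernstein(n, i, t) * py for i, (_, py) in enumerate(ctrl))
    return (x, y)

# One cubic segment between two waypoints; endpoints coincide with the
# first and last control points, which makes multi-segment continuity easy
# to enforce by sharing control points across segment joints.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
samples = [bezier_point(ctrl, k / 10) for k in range(11)]
print(samples[0], samples[-1])
```

A useful property for smooth multi-segment navigation is that the curve starts at the first control point and ends at the last, with its end tangents set by the adjacent control points.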

Figures: sensors-22-02387-g001 through g023, available with the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC8949785/

Similar Articles

1. Object-Based Reliable Visual Navigation for Mobile Robot. Sensors (Basel). 2022 Mar 20;22(6):2387. doi: 10.3390/s22062387.
2. Hierarchical path planning from speech instructions with spatial concept-based topometric semantic mapping. Front Robot AI. 2024 Aug 1;11:1291426. doi: 10.3389/frobt.2024.1291426. eCollection 2024.
3. ITC: Infused Tangential Curves for Smooth 2D and 3D Navigation of Mobile Robots. Sensors (Basel). 2019 Oct 10;19(20):4384. doi: 10.3390/s19204384.
4. A Context-Aware Navigation Framework for Ground Robots in Horticultural Environments. Sensors (Basel). 2024 Jun 5;24(11):3663. doi: 10.3390/s24113663.
5. Research on Mobile Robot Navigation Method Based on Semantic Information. Sensors (Basel). 2024 Jul 4;24(13):4341. doi: 10.3390/s24134341.
6. A Navigation Path Search and Optimization Method for Mobile Robots Based on the Rat Brain's Cognitive Mechanism. Biomimetics (Basel). 2023 Sep 14;8(5):427. doi: 10.3390/biomimetics8050427.
7. Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning. Sensors (Basel). 2019 Apr 1;19(7):1576. doi: 10.3390/s19071576.
8. Prune-able fuzzy ART neural architecture for robot map learning and navigation in dynamic environments. IEEE Trans Neural Netw. 2006 Sep;17(5):1235-49. doi: 10.1109/TNN.2006.877534.
9. Deep reinforcement learning-aided autonomous navigation with landmark generators. Front Neurorobot. 2023 Aug 22;17:1200214. doi: 10.3389/fnbot.2023.1200214. eCollection 2023.
10. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision. Front Neurorobot. 2017 Aug 29;11:46. doi: 10.3389/fnbot.2017.00046. eCollection 2017.

Cited By

1. Efficient Learning-Based Robotic Navigation Using Feature-Based RGB-D Pose Estimation and Topological Maps. Entropy (Basel). 2025 Jun 15;27(6):641. doi: 10.3390/e27060641.
2. An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition. Sensors (Basel). 2025 Jan 24;25(3):713. doi: 10.3390/s25030713.
3. A Novel Obstacle Traversal Method for Multiple Robotic Fish Based on Cross-Modal Variational Autoencoders and Imitation Learning. Biomimetics (Basel). 2024 Apr 8;9(4):221. doi: 10.3390/biomimetics9040221.
4. A Robotics Experimental Design Method Based on PDCA: A Case Study of Wall-Following Robots. Sensors (Basel). 2024 Mar 14;24(6):1869. doi: 10.3390/s24061869.
5. Cross-Domain Indoor Visual Place Recognition for Mobile Robot via Generalization Using Style Augmentation. Sensors (Basel). 2023 Jul 4;23(13):6134. doi: 10.3390/s23136134.
6. Particle Swarm Algorithm Path-Planning Method for Mobile Robots Based on Artificial Potential Fields. Sensors (Basel). 2023 Jul 1;23(13):6082. doi: 10.3390/s23136082.
7. Diversity Learning Based on Multi-Latent Space for Medical Image Visual Question Generation. Sensors (Basel). 2023 Jan 17;23(3):1057. doi: 10.3390/s23031057.

References

1. VoteNet: A Deep Learning Label Fusion Method for Multi-Atlas Segmentation. Med Image Comput Comput Assist Interv. 2019 Oct;11766:202-210. doi: 10.1007/978-3-030-32248-9_23. Epub 2019 Oct 10.
2. Bernstein Polynomial-Based Method for Solving Optimal Trajectory Generation Problems. Sensors (Basel). 2022 Feb 27;22(5):1869. doi: 10.3390/s22051869.
3. Path Planning for Autonomous Mobile Robots: A Review. Sensors (Basel). 2021 Nov 26;21(23):7898. doi: 10.3390/s21237898.
4. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans Pattern Anal Mach Intell. 2021 Dec;43(12):4338-4364. doi: 10.1109/TPAMI.2020.3005434. Epub 2021 Nov 3.
5. VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation. Sensors (Basel). 2018 Nov 19;18(11):4036. doi: 10.3390/s18114036.