
Scene perception based visual navigation of mobile robot in indoor environment.

Authors

Ran T, Yuan L, Zhang J B

Affiliations

School of Mechanical Engineering, Xinjiang University, Urumqi, China.

School of Mechanical Engineering, Xinjiang University, Urumqi, China; Beijing Advanced Innovation Center for Soft Matter Science and Engineering, Beijing University of Chemical Technology, Beijing, China.

Publication

ISA Trans. 2021 Mar;109:389-400. doi: 10.1016/j.isatra.2020.10.023. Epub 2020 Oct 12.

DOI: 10.1016/j.isatra.2020.10.023
PMID: 33069374
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7550175/
Abstract

Vision-only navigation is the key to reducing the cost and enabling the widespread application of indoor mobile robots. Considering the unpredictable nature of artificial environments, deep learning techniques, with their strong ability to abstract image features, can be used to perform navigation. In this paper, we proposed a low-cost, vision-only perception method for indoor mobile robot navigation, converting the problem of visual navigation into one of scene classification. Existing related research based on deep scene classification networks has lower accuracy and imposes a heavier computational burden, and the resulting navigation systems have not been fully assessed in previous work. Therefore, we designed a shallow convolutional neural network (CNN) with higher scene classification accuracy and efficiency to process images captured by a monocular camera. In addition, we proposed an adaptive weighted control (AWC) algorithm and combined it with regular control (RC) to improve the robot's motion performance. We demonstrated the capability and robustness of the proposed navigation method through extensive experiments in both static and dynamic unknown environments. The qualitative and quantitative results showed that the system performs better than previous related work in unknown environments.

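The abstract frames navigation as scene classification on monocular images followed by a command-generation step. Below is a minimal Python/PyTorch sketch of that pipeline, not the authors' implementation: the layer sizes, scene labels, command table, and the probability-weighted blending (a rough stand-in for the adaptive weighted control idea) are illustrative assumptions rather than values taken from the paper.

import torch
import torch.nn as nn

class ShallowSceneCNN(nn.Module):
    """A small convolutional classifier for monocular indoor scene images (illustrative sizes)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling keeps the head small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical scene classes and a naive class-to-command lookup; the paper's AWC
# adapts the weighting online, which is not reproduced here.
SCENES = ["corridor", "left_corner", "right_corner", "doorway", "obstacle"]
COMMANDS = {                      # (linear velocity m/s, angular velocity rad/s)
    "corridor":     (0.3,  0.0),
    "left_corner":  (0.1,  0.5),
    "right_corner": (0.1, -0.5),
    "doorway":      (0.2,  0.0),
    "obstacle":     (0.0,  0.4),
}

def navigate(frame: torch.Tensor, model: ShallowSceneCNN):
    """Classify one normalized RGB frame (1 x 3 x H x W) and return a velocity command."""
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1).squeeze(0)
    # Probability-weighted blend of per-class commands: a rough analogue of
    # adaptive weighting; the paper's AWC/RC switching logic is not reproduced.
    v = sum(p.item() * COMMANDS[s][0] for p, s in zip(probs, SCENES))
    w = sum(p.item() * COMMANDS[s][1] for p, s in zip(probs, SCENES))
    return v, w

For example, navigate(torch.randn(1, 3, 128, 128), ShallowSceneCNN()) returns a (v, w) pair from a randomly initialized network; in the paper, the classifier is trained on labeled indoor scenes and the control side combines AWC with regular control rather than using a fixed lookup.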

[Figures: graphical abstract and Figs. 1-24; full-size images are available from the PMC record linked above.]

Similar Articles

1. Scene perception based visual navigation of mobile robot in indoor environment.
ISA Trans. 2021 Mar;109:389-400. doi: 10.1016/j.isatra.2020.10.023. Epub 2020 Oct 12.
2. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.
Sensors (Basel). 2017 Jun 12;17(6):1341. doi: 10.3390/s17061341.
3. Improved Hybrid Model for Obstacle Detection and Avoidance in Robot Operating System Framework (Rapidly Exploring Random Tree and Dynamic Windows Approach).
Sensors (Basel). 2024 Apr 2;24(7):2262. doi: 10.3390/s24072262.
4. A Novel Artificial Organic Control System for Mobile Robot Navigation in Assisted Living Using Vision- and Neural-Based Strategies.
Comput Intell Neurosci. 2018 Dec 2;2018:4189150. doi: 10.1155/2018/4189150. eCollection 2018.
5. Computer Vision Positioning and Local Obstacle Avoidance Optimization Based on Neural Network Algorithm.
Comput Intell Neurosci. 2022 Apr 1;2022:3061910. doi: 10.1155/2022/3061910. eCollection 2022.
6. A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera.
Sensors (Basel). 2020 Apr 27;20(9):2477. doi: 10.3390/s20092477.
7. Path planning and collision avoidance methods for distributed multi-robot systems in complex dynamic environments.
Math Biosci Eng. 2023 Jan;20(1):145-178. doi: 10.3934/mbe.2023008. Epub 2022 Sep 30.
8. Autonomous Navigation by Mobile Robot with Sensor Fusion Based on Deep Reinforcement Learning.
Sensors (Basel). 2024 Jun 16;24(12):3895. doi: 10.3390/s24123895.
9. Monocular camera and laser based semantic mapping system with temporal-spatial data association for indoor mobile robots.
Multimed Tools Appl. 2023 Mar 7:1-26. doi: 10.1007/s11042-023-14796-1.
10. Vision-based omnidirectional indoor robots for autonomous navigation and localization in manufacturing industry.
Heliyon. 2024 Feb 13;10(4):e26042. doi: 10.1016/j.heliyon.2024.e26042. eCollection 2024 Feb 29.
