


Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.

Authors

Payá Luis, Reinoso Oscar, Jiménez Luis M, Juliá Miguel

Affiliations

Department of Systems Engineering and Automation, Miguel Hernandez University, Elche, Alicante, Spain.

Q-Bot Limited, Block G, Riverside Business Centre, Bendon Valley, London, United Kingdom.

Publication

PLoS One. 2017 May 2;12(5):e0175938. doi: 10.1371/journal.pone.0175938. eCollection 2017.

DOI:10.1371/journal.pone.0175938
PMID:28464032
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5413056/
Abstract

Along the past years, mobile robots have proliferated both in domestic and in industrial environments to solve some tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the necessity of introducing changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative to solve a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, it is expected to take decisions about its localization in the environment and about the trajectory that it has to follow in order to arrive to the target points. More concisely, the robot has to find a relatively good solution to two crucial problems: building a model of the environment, and estimating the position of the robot within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images from the environment. First, the robot goes along the trajectories to include in the model and uses the visual information captured to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to solve these problems, global appearance techniques are used in this work. They have emerged recently as a robust and efficient alternative compared to landmark extraction techniques. A global description method based on Radon Transform is used to design mapping and localization algorithms and a set of images captured by a mobile robot in a real environment, under realistic operation conditions, is used to test the performance of these algorithms.
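The key property behind the paper's approach can be illustrated with a minimal sketch. This is not the Radon-Transform descriptor the authors use; it is a simpler column-sum signature (hypothetical helper names), shown only to demonstrate why global appearance works for orientation estimation: in a panoramic (omnidirectional) image, a rotation of the robot appears as a circular shift of the image columns, so the best-correlating circular shift between two global signatures recovers the relative orientation without extracting any landmarks.

```python
import random

def column_descriptor(panorama):
    # panorama: list of rows (each row a list of pixel intensities).
    # Summing each column compresses the whole image into a 1-D
    # global-appearance signature; no landmark extraction is needed.
    width = len(panorama[0])
    return [sum(row[c] for row in panorama) for c in range(width)]

def estimate_rotation(ref, query):
    # A robot rotation circularly shifts the panorama's columns, so the
    # circular shift that best correlates the two signatures estimates
    # the relative orientation, returned here in degrees.
    d_ref = column_descriptor(ref)
    d_qry = column_descriptor(query)
    w = len(d_ref)
    def score(shift):
        return sum(d_ref[(c + shift) % w] * d_qry[c] for c in range(w))
    best = max(range(w), key=score)
    return best * 360.0 / w

# Toy usage: rotate a random 36-column panorama by 9 columns (90 degrees)
# and recover that angle from the global signatures alone.
random.seed(0)
ref = [[random.random() for _ in range(36)] for _ in range(8)]
query = [row[9:] + row[:9] for row in ref]
print(estimate_rotation(ref, query))
```

The same matching idea extends to localization: comparing the query signature against the signatures stored along the trajectory gives the nearest map position, which is the mapping/localization scheme the abstract describes (with the Radon Transform as the actual descriptor).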


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/c16c16638ebc/pone.0175938.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/558c10151b92/pone.0175938.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/15bf2771d1cf/pone.0175938.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/8ce9a80692c6/pone.0175938.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/b99863d50b14/pone.0175938.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/b4cc5c658a34/pone.0175938.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/360a4ce5556d/pone.0175938.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/eb7661306c49/pone.0175938.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/f46cb00ee924/pone.0175938.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/b84471967829/pone.0175938.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/aa3c71d730dc/pone.0175938.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae9b/5413056/b60b8b007081/pone.0175938.g012.jpg

Similar articles

1
Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.
PLoS One. 2017 May 2;12(5):e0175938. doi: 10.1371/journal.pone.0175938. eCollection 2017.
2
Position estimation and local mapping using omnidirectional images and global appearance descriptors.
Sensors (Basel). 2015 Oct 16;15(10):26368-95. doi: 10.3390/s151026368.
3
The Role of Global Appearance of Omnidirectional Images in Relative Distance and Orientation Retrieval.
Sensors (Basel). 2021 May 11;21(10):3327. doi: 10.3390/s21103327.
4
Performance of global-appearance descriptors in map building and localization using omnidirectional vision.
Sensors (Basel). 2014 Feb 14;14(2):3033-64. doi: 10.3390/s140203033.
5
The Synthetic Moth: A Neuromorphic Approach toward Artificial Olfaction in Robots.
6
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
IEEE Trans Cybern. 2015 Aug;45(8):1633-46. doi: 10.1109/TCYB.2014.2357797. Epub 2014 Sep 23.
7
Estimation of visual maps with a robot network equipped with vision sensors.
Sensors (Basel). 2010;10(5):5209-32. doi: 10.3390/s100505209. Epub 2010 May 25.
8
Building an Enhanced Vocabulary of the Robot Environment with a Ceiling Pointing Camera.
Sensors (Basel). 2016 Apr 7;16(4):493. doi: 10.3390/s16040493.
9
Map building and monte carlo localization using global appearance of omnidirectional images.
Sensors (Basel). 2010;10(12):11468-97. doi: 10.3390/s101211468. Epub 2010 Dec 14.
10
Adaptive Trajectory Tracking of Nonholonomic Mobile Robots Using Vision-Based Position and Velocity Estimation.
IEEE Trans Cybern. 2018 Feb;48(2):571-582. doi: 10.1109/TCYB.2016.2646719. Epub 2017 Jan 13.

Cited by

1
Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices.
Sensors (Basel). 2023 Jul 18;23(14):6485. doi: 10.3390/s23146485.
2
Multi-Sensor Orientation Tracking for a Façade-Cleaning Robot.
Sensors (Basel). 2020 Mar 8;20(5):1483. doi: 10.3390/s20051483.
3
RGB-D SLAM Using Point-Plane Constraints for Indoor Environments.

References

1
Optimal Appearance Model for Visual Tracking.
PLoS One. 2016 Jan 20;11(1):e0146763. doi: 10.1371/journal.pone.0146763. eCollection 2016.
2
Position estimation and local mapping using omnidirectional images and global appearance descriptors.
Sensors (Basel). 2015 Oct 16;15(10):26368-95. doi: 10.3390/s151026368.
3
On-device mobile visual location recognition by using panoramic images and compressed sensing based visual descriptors.
Sensors (Basel). 2019 Jun 17;19(12):2721. doi: 10.3390/s19122721.
4
Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching.
Sensors (Basel). 2018 Jun 26;18(7):2041. doi: 10.3390/s18072041.
5
Performance of global-appearance descriptors in map building and localization using omnidirectional vision.
Sensors (Basel). 2014 Feb 14;14(2):3033-64. doi: 10.3390/s140203033.
6
Estimation of visual maps with a robot network equipped with vision sensors.
Sensors (Basel). 2010;10(5):5209-32. doi: 10.3390/s100505209. Epub 2010 May 25.
7
Building the gist of a scene: the role of global image features in recognition.
Prog Brain Res. 2006;155:23-36. doi: 10.1016/S0079-6123(06)55002-2.