Position estimation and local mapping using omnidirectional images and global appearance descriptors.

Authors

Berenguer Yerai, Payá Luis, Ballesta Mónica, Reinoso Oscar

Affiliation

Departamento de Ingeniería de Sistemas y Automática, Miguel Hernández University, Avda. de la Universidad s/n, Elche (Alicante) 03202, Spain.

Publication

Sensors (Basel). 2015 Oct 16;15(10):26368-95. doi: 10.3390/s151026368.

DOI: 10.3390/s151026368
PMID: 26501289
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4634508/
Abstract

This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods.
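The pipeline the abstract describes — summarize each omnidirectional image with a single global appearance descriptor based on the Radon transform, then localize by finding the map image whose descriptor is closest to the current view's — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `radon_descriptor` approximates the Radon transform by summing pixel intensities along a set of rotated projection directions, and `nearest_map_position` does a plain Euclidean nearest-neighbour search; the function names and the number of projection angles are choices made for the sketch.

```python
import numpy as np

def radon_descriptor(img, n_angles=36):
    """Approximate Radon-transform global descriptor of a grayscale image.

    For each of n_angles projection directions, rotate the image
    (nearest-neighbour resampling about its centre) and take the column
    sums, i.e. the projection of the image onto a line at that angle.
    The concatenated, L2-normalised projections form the descriptor.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    projections = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        # Inverse rotation of pixel coordinates about the image centre.
        xr = np.clip(np.round(c * (xs - cx) + s * (ys - cy) + cx).astype(int), 0, w - 1)
        yr = np.clip(np.round(-s * (xs - cx) + c * (ys - cy) + cy).astype(int), 0, h - 1)
        rotated = img[yr, xr]
        projections.append(rotated.sum(axis=0))  # column sums = projection at theta
    d = np.concatenate(projections)
    return d / (np.linalg.norm(d) + 1e-12)

def nearest_map_position(query_desc, map_descs):
    """Index of the map image whose descriptor is closest (Euclidean)."""
    dists = [np.linalg.norm(query_desc - m) for m in map_descs]
    return int(np.argmin(dists))
```

In the first scenario of the paper, `map_descs` would come from images captured at known positions, so the returned index directly yields the nearest known position to the robot's current (unknown) one.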

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/345adb6323df/sensors-15-26368-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/33cc6342f832/sensors-15-26368-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/a3a0499b1a3a/sensors-15-26368-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/58ea1d103323/sensors-15-26368-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/405d6f41d7cf/sensors-15-26368-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/4d6aaec74bb1/sensors-15-26368-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/9fb468a91112/sensors-15-26368-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/31928f8c62df/sensors-15-26368-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/ab7ab128a05a/sensors-15-26368-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/d9121db542f9/sensors-15-26368-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/20ce57d1b8b1/sensors-15-26368-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd09/4634508/28fd2613c202/sensors-15-26368-g012.jpg

Similar articles

1. Position estimation and local mapping using omnidirectional images and global appearance descriptors.
   Sensors (Basel). 2015 Oct 16;15(10):26368-95. doi: 10.3390/s151026368.
2. Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.
   PLoS One. 2017 May 2;12(5):e0175938. doi: 10.1371/journal.pone.0175938. eCollection 2017.
3. Map building and monte carlo localization using global appearance of omnidirectional images.
   Sensors (Basel). 2010;10(12):11468-97. doi: 10.3390/s101211468. Epub 2010 Dec 14.
4. Performance of global-appearance descriptors in map building and localization using omnidirectional vision.
   Sensors (Basel). 2014 Feb 14;14(2):3033-64. doi: 10.3390/s140203033.
5. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
   IEEE Trans Cybern. 2015 Aug;45(8):1633-46. doi: 10.1109/TCYB.2014.2357797. Epub 2014 Sep 23.
6. The Role of Global Appearance of Omnidirectional Images in Relative Distance and Orientation Retrieval.
   Sensors (Basel). 2021 May 11;21(10):3327. doi: 10.3390/s21103327.
7. Appearance-Based Sequential Robot Localization Using a Patchwise Approximation of a Descriptor Manifold.
   Sensors (Basel). 2021 Apr 2;21(7):2483. doi: 10.3390/s21072483.
8. Improved Omnidirectional Odometry for a View-Based Mapping Approach.
   Sensors (Basel). 2017 Feb 9;17(2):325. doi: 10.3390/s17020325.
9. Model-Predictive Control for Omnidirectional Mobile Robots in Logistic Environments Based on Object Detection Using CNNs.
   Sensors (Basel). 2023 May 23;23(11):4992. doi: 10.3390/s23114992.
10. Scale-invariant features and polar descriptors in omnidirectional imaging.
    IEEE Trans Image Process. 2012 May;21(5):2412-23. doi: 10.1109/TIP.2012.2185937. Epub 2012 Jan 27.

Cited by

1. The Role of Global Appearance of Omnidirectional Images in Relative Distance and Orientation Retrieval.
   Sensors (Basel). 2021 May 11;21(10):3327. doi: 10.3390/s21103327.
2. Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.
   PLoS One. 2017 May 2;12(5):e0175938. doi: 10.1371/journal.pone.0175938. eCollection 2017.
3. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
   Sensors (Basel). 2016 Feb 6;16(2):217. doi: 10.3390/s16020217.

References

1. Performance of global-appearance descriptors in map building and localization using omnidirectional vision.
   Sensors (Basel). 2014 Feb 14;14(2):3033-64. doi: 10.3390/s140203033.
2. Map building and monte carlo localization using global appearance of omnidirectional images.
   Sensors (Basel). 2010;10(12):11468-97. doi: 10.3390/s101211468. Epub 2010 Dec 14.
3. Rapid biologically-inspired scene classification using features shared with visual attention.
   IEEE Trans Pattern Anal Mach Intell. 2007 Feb;29(2):300-12. doi: 10.1109/TPAMI.2007.40.
4. Building the gist of a scene: the role of global image features in recognition.
   Prog Brain Res. 2006;155:23-36. doi: 10.1016/S0079-6123(06)55002-2.