
Robot evolutionary localization based on attentive visual short-term memory.

Affiliation

Grupo de Robótica, Universidad Rey Juan Carlos, Fuenlabrada, Spain.

Publication

Sensors (Basel). 2013 Jan 21;13(1):1268-99. doi: 10.3390/s130101268.

DOI: 10.3390/s130101268
PMID: 23337333
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3574735/
Abstract

Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios.

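As a rough illustration of the attention mechanism described in the abstract (this is not the authors' implementation; the class, the time-based urgency score, and the fixed exploration bonus are all assumptions), choosing where to point the camera can be sketched as picking the candidate — a remembered element due for reobservation, or an unexplored direction — with the highest urgency:

```python
class AttentionSelector:
    """Toy sketch: balance reobserving visual-memory elements vs. exploring new areas."""

    def __init__(self, exploration_urgency=2.0):
        # Hypothetical constant: how urgent exploring an unseen area is,
        # expressed in the same units as seconds-since-last-observation.
        self.exploration_urgency = exploration_urgency
        self.memory = {}  # element id -> timestamp of last observation

    def observe(self, element_id, t):
        # Reobserving an element resets its staleness.
        self.memory[element_id] = t

    def next_target(self, t):
        # Urgency of a remembered element grows with time since it was last seen;
        # exploration competes as one more candidate with a fixed urgency.
        candidates = {eid: t - last for eid, last in self.memory.items()}
        candidates["explore"] = self.exploration_urgency
        return max(candidates, key=candidates.get)

sel = AttentionSelector(exploration_urgency=2.0)
sel.observe("door", t=0.0)
sel.observe("table", t=1.0)
print(sel.next_target(t=1.5))  # "explore": both objects were seen recently
print(sel.next_target(t=5.0))  # "door": stale for 5 s, exceeds the exploration urgency
```

The paper's actual module would score real task-oriented objects and 3D segments in the dynamic visual memory; this sketch only shows the reobservation-versus-exploration trade-off the abstract describes.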

Figures (1-28)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/bfd919054534/sensors-13-01268f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/ea1699b968ca/sensors-13-01268f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/ab1212ca0d15/sensors-13-01268f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/02aa890425e6/sensors-13-01268f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/c32b6f6e158f/sensors-13-01268f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/e625ecbee07a/sensors-13-01268f6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/ace6653c9bf7/sensors-13-01268f7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/44e2f5548151/sensors-13-01268f8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/d73cd5f35877/sensors-13-01268f9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/c38f1c29a598/sensors-13-01268f10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/11c71074d898/sensors-13-01268f11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/85d6194a0622/sensors-13-01268f12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/5db6a17fdf49/sensors-13-01268f13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/939bb94bd285/sensors-13-01268f14.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/d399865901f2/sensors-13-01268f15.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/2ff2bec198e1/sensors-13-01268f16.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/b8b325bb7914/sensors-13-01268f17.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/51bc80edde26/sensors-13-01268f18.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/b80447a4fd6a/sensors-13-01268f19.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/bffece507f6d/sensors-13-01268f20.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/32776a0bb58d/sensors-13-01268f21.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/02cd8ad43bf0/sensors-13-01268f22.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/d772805c2ab0/sensors-13-01268f23.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/11620de87933/sensors-13-01268f24.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/e8680ac38357/sensors-13-01268f25.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/44879e580566/sensors-13-01268f26.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/b0a39f1b877e/sensors-13-01268f27.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4312/3574735/3fbed9e0fdf9/sensors-13-01268f28.jpg

Similar Articles

1
Robot evolutionary localization based on attentive visual short-term memory.
Sensors (Basel). 2013 Jan 21;13(1):1268-99. doi: 10.3390/s130101268.
2
An intelligent space for mobile robot localization using a multi-camera system.
Sensors (Basel). 2014 Aug 15;14(8):15039-64. doi: 10.3390/s140815039.
3
Multi-Robot 2.5D Localization and Mapping Using a Monte Carlo Algorithm on a Multi-Level Surface.
Sensors (Basel). 2021 Jul 4;21(13):4588. doi: 10.3390/s21134588.
4
Active vision and receptive field development in evolutionary robots.
Evol Comput. 2005 Winter;13(4):527-44. doi: 10.1162/106365605774666912.
5
Visual control of robots using range images.
Sensors (Basel). 2010;10(8):7303-22. doi: 10.3390/s100807303. Epub 2010 Aug 4.
6
RoboCoV Cleaner: An Indoor Autonomous UV-C Disinfection Robot with Advanced Dual-Safety Systems.
Sensors (Basel). 2024 Feb 2;24(3):0. doi: 10.3390/s24030974.
7
Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction.
Sensors (Basel). 2019 Nov 13;19(22):4943. doi: 10.3390/s19224943.
8
3D Recognition Based on Sensor Modalities for Robotic Systems: A Survey.
Sensors (Basel). 2021 Oct 27;21(21):7120. doi: 10.3390/s21217120.
9
A Brain-Robot Interaction System by Fusing Human and Machine Intelligence.
IEEE Trans Neural Syst Rehabil Eng. 2019 Mar;27(3):533-542. doi: 10.1109/TNSRE.2019.2897323. Epub 2019 Feb 4.
10
Real-time multiple human perception with color-depth cameras on a mobile robot.
IEEE Trans Cybern. 2013 Oct;43(5):1429-41. doi: 10.1109/TCYB.2013.2275291. Epub 2013 Aug 21.

Cited By

1
Gaze Control of a Robotic Head for Realistic Interaction With Humans.
Front Neurorobot. 2020 Jun 17;14:34. doi: 10.3389/fnbot.2020.00034. eCollection 2020.
2
A reliability-based particle filter for humanoid robot self-localization in RoboCup Standard Platform League.
Sensors (Basel). 2013 Nov 4;13(11):14954-83. doi: 10.3390/s131114954.
3
Introduction to the special issue on "New trends towards automatic vehicle control and perception systems".
Sensors (Basel). 2013 May 2;13(5):5712-9. doi: 10.3390/s130505712.

References

1
Computational modelling of visual attention.
Nat Rev Neurosci. 2001 Mar;2(3):194-203. doi: 10.1038/35058500.