
Autonomous Scene Exploration for Robotics: A Conditional Random View-Sampling and Evaluation Using a Voxel-Sorting Mechanism for Efficient Ray Casting.

Affiliations

Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.

Institute of Electronics and Informatics Engineering of Aveiro, 3810-193 Aveiro, Portugal.

Publication Information

Sensors (Basel). 2020 Aug 4;20(15):4331. doi: 10.3390/s20154331.

DOI: 10.3390/s20154331
PMID: 32759633
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7435677/
Abstract

Carrying out the task of the exploration of a scene by an autonomous robot entails a set of complex skills, such as the ability to create and update a representation of the scene, the knowledge of the regions of the scene which are yet unexplored, the ability to estimate the most efficient point of view from the perspective of an explorer agent and, finally, the ability to physically move the system to the selected Next Best View (NBV). This paper proposes an autonomous exploration system that makes use of a dual OcTree representation to encode the regions in the scene which are occupied, free, and unknown. The NBV is estimated through a discrete approach that samples and evaluates a set of view hypotheses that are created by a conditioned random process which ensures that the views have some chance of adding novel information to the scene. The algorithm uses ray-casting defined according to the characteristics of the RGB-D sensor, and a mechanism that sorts the voxels to be tested in a way that considerably speeds up the assessment. The sampled view that is estimated to provide the largest amount of novel information is selected, and the system moves to that location, where a new exploration step begins. The exploration session is terminated when there are no more unknown regions in the scene or when those that exist cannot be observed by the system. The experimental setup consisted of a robotic manipulator with an RGB-D sensor assembled on its end-effector, all managed by a Robot Operating System (ROS) based architecture. The manipulator provides movement, while the sensor collects information about the scene. Experimental results span over three test scenarios designed to evaluate the performance of the proposed system. In particular, the exploration performance of the proposed system is compared against that of human subjects. Results show that the proposed approach is able to carry out the exploration of a scene, even when it starts from scratch, building up knowledge as the exploration progresses. Furthermore, in these experiments, the system was able to complete the exploration of the scene in less time when compared to human subjects.
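
To make the pipeline in the abstract concrete, below is a minimal Python sketch of one NBV iteration: candidate views are drawn near unknown voxels (the conditioning that gives each hypothesis some chance of adding information), each candidate is scored by casting rays and counting the unknown voxels it would reveal, and the highest-scoring view is returned. Everything here is an illustrative assumption rather than the authors' implementation: the dict-based voxel map stands in for the dual OcTree, the six axis-aligned rays stand in for the RGB-D sensor model, and the nearest-first voxel traversal with early exit at occupied voxels stands in for the paper's voxel-sorting mechanism.

```python
# Illustrative sketch (not the authors' code) of one Next Best View step.
import random

OCCUPIED, FREE, UNKNOWN = 0, 1, 2  # dual-map states; unobserved defaults to UNKNOWN


def information_gain(origin, directions, voxel_map, max_range=20):
    """Score a view hypothesis: march each ray outward from `origin`,
    nearest voxel first (a simplified stand-in for the paper's voxel
    sorting), stop at the first occupied voxel, and count the unknown
    voxels the sensor would observe along the way."""
    gain = 0
    for dx, dy, dz in directions:
        for t in range(1, max_range + 1):  # increasing distance
            key = (origin[0] + dx * t, origin[1] + dy * t, origin[2] + dz * t)
            state = voxel_map.get(key, UNKNOWN)
            if state == OCCUPIED:
                break  # ray is blocked; voxels behind it stay unseen
            if state == UNKNOWN:
                gain += 1
    return gain


def next_best_view(voxel_map, n_views=50):
    """Conditioned random sampling: draw candidate viewpoints near unknown
    voxels so each hypothesis can plausibly add information, then return
    the candidate with the largest estimated gain."""
    unknown = [k for k, s in voxel_map.items() if s == UNKNOWN]
    if not unknown:
        return None  # nothing left to explore: the session terminates
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    best_gain, best_view = -1, None
    for _ in range(n_views):
        ux, uy, uz = random.choice(unknown)  # condition the sample on a gap in the map
        view = (ux + random.randint(-3, 3), uy + random.randint(-3, 3), uz)
        g = information_gain(view, dirs, voxel_map)
        if g > best_gain:
            best_gain, best_view = g, view
    return best_view


# Toy map: a free corridor ending in an occupied wall, with unknown space alongside.
vmap = {(x, 0, 0): FREE for x in range(5)}
vmap[(5, 0, 0)] = OCCUPIED
vmap.update({(x, 1, 0): UNKNOWN for x in range(8)})
print(next_best_view(vmap, n_views=20))
```

In the system described above, this selection step would repeat from each reached viewpoint, updating the map with new sensor data, until no observable unknown regions remain; in the sketch that termination corresponds to next_best_view returning None.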

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/4ae6f36900ff/sensors-20-04331-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/6cb240268366/sensors-20-04331-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/700e847a5910/sensors-20-04331-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/a0cab88904b7/sensors-20-04331-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/7b4b5e225bdb/sensors-20-04331-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/69d13cdeaa46/sensors-20-04331-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/baadc3fa6180/sensors-20-04331-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/319632e468fd/sensors-20-04331-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/7736f9241d90/sensors-20-04331-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/f63e6bb7cd0d/sensors-20-04331-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/c975b3c930c6/sensors-20-04331-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3003/7435677/13d9bf05be0f/sensors-20-04331-g017.jpg

Similar Articles

1
Autonomous Scene Exploration for Robotics: A Conditional Random View-Sampling and Evaluation Using a Voxel-Sorting Mechanism for Efficient Ray Casting.
Sensors (Basel). 2020 Aug 4;20(15):4331. doi: 10.3390/s20154331.
2
Two-Dimensional Frontier-Based Viewpoint Generation for Exploring and Mapping Underwater Environments.
Sensors (Basel). 2019 Mar 25;19(6):1460. doi: 10.3390/s19061460.
3
Optimal Frontier-Based Autonomous Exploration in Unconstructed Environment Using RGB-D Sensor.
Sensors (Basel). 2020 Nov 14;20(22):6507. doi: 10.3390/s20226507.
4
Reactive and Cognitive Search Strategies for Olfactory Robots.
5
Autonomous Sequence Generation for a Neural Dynamic Robot: Scene Perception, Serial Order, and Object-Oriented Movement.
Front Neurorobot. 2019 Nov 15;13:95. doi: 10.3389/fnbot.2019.00095. eCollection 2019.
6
Topological Frontier-Based Exploration and Map-Building Using Semantic Information.
Sensors (Basel). 2019 Oct 22;19(20):4595. doi: 10.3390/s19204595.
7
Robot-assisted Sistrunk's operation, total thyroidectomy, and neck dissection via a transaxillary and retroauricular (TARA) approach in papillary carcinoma arising in thyroglossal duct cyst and thyroid gland.
Ann Surg Oncol. 2012 Dec;19(13):4259-61. doi: 10.1245/s10434-012-2674-y. Epub 2012 Oct 16.
8
Autonomous Exploration of Unknown Indoor Environments for High-Quality Mapping Using Feature-Based RGB-D SLAM.
Sensors (Basel). 2022 Jul 7;22(14):5117. doi: 10.3390/s22145117.
9
Hierarchical framework for mobile robots to effectively and autonomously explore unknown environments.
ISA Trans. 2023 Mar;134:1-15. doi: 10.1016/j.isatra.2022.09.005. Epub 2022 Sep 6.
10
Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.
Disabil Rehabil Assist Technol. 2018 Feb;13(2):140-145. doi: 10.1080/17483107.2017.1299804. Epub 2017 Mar 22.
