An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living.

Affiliation

Department of Electrical and Information Engineering, Politecnico di Bari, 70125 Bari, Italy.

Publication Information

Sensors (Basel). 2022 Dec 22;23(1):103. doi: 10.3390/s23010103.

Abstract

Most humanoid social robots in widespread use today are designed only for verbal and animated interactions with users, and despite being equipped with two upper arms for interactive animation, they lack object manipulation capabilities. In this paper, we propose the MONOCULAR (eMbeddable autONomous ObjeCt manipULAtion Routines) framework, which implements a set of routines to add manipulation functionalities to social robots by exploiting the functional data fusion of two RGB cameras and a 3D depth sensor placed in the head frame. The framework is designed to: (i) localize specific objects to be manipulated via the RGB cameras; (ii) define the characteristics of the shelf on which they are placed; and (iii) autonomously adapt approach and manipulation routines to avoid collisions and maximize grabbing accuracy. To localize the item on the shelf, MONOCULAR exploits an embeddable version of the You Only Look Once (YOLO) object detector. The RGB camera outputs are also used to estimate the height of the shelf using an edge-detection algorithm. Based on the item's position and the estimated shelf height, MONOCULAR is designed to select between two possible routines that dynamically optimize the approach and object manipulation parameters according to real-time analysis of the RGB and 3D sensor frames. These two routines are optimized for a central or a lateral approach to objects on a shelf, respectively. The MONOCULAR procedures are designed to be fully automatic, intrinsically protecting sensitive user data and stored home or hospital maps. MONOCULAR was optimized for Pepper by SoftBank Robotics. To characterize the proposed system, a case study in which Pepper is used as a drug delivery operator is presented. The case study is divided into: (i) pharmaceutical package search; (ii) object approach and manipulation; and (iii) delivery operations. Experimental data showed that the object manipulation routine for laterally placed objects achieves a best grabbing success rate of 96%, while the routine for centrally placed objects reaches 97% across a wide range of shelf heights. Finally, a proof of concept is presented to demonstrate the applicability of the MONOCULAR framework in a real-life scenario.
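
The abstract outlines a three-stage pipeline: YOLO-based item localization in the RGB frames, edge-based estimation of the shelf height, and selection of a central or lateral manipulation routine. The sketch below illustrates only that decision logic; it is a minimal, hypothetical reconstruction from the abstract, and every function name, threshold, and parameter (estimate_shelf_edge_row, select_routine, lateral_fraction) is an assumption rather than part of the published MONOCULAR implementation.

import cv2
import numpy as np

def estimate_shelf_edge_row(rgb_frame):
    # Stand-in for the paper's edge-detecting step: take the image row with the
    # strongest horizontal edge response as the shelf's front edge.
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    row_strength = edges.sum(axis=1)      # total edge energy per image row
    return int(np.argmax(row_strength))

def select_routine(item_box, frame_width, lateral_fraction=0.25):
    # Choose the manipulation routine from the item's horizontal offset with
    # respect to the frame centre; the 0.25 threshold is an illustrative value.
    x, y, w, h = item_box
    center_x = x + w / 2.0
    offset = abs(center_x - frame_width / 2.0) / frame_width
    return "lateral" if offset > lateral_fraction else "central"

# Example usage, assuming the bounding box comes from the YOLO detection stage:
# frame = cv2.imread("shelf_view.jpg")
# shelf_row = estimate_shelf_edge_row(frame)
# routine = select_routine(item_box=(420, 180, 60, 90), frame_width=frame.shape[1])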

Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7720/9823472/dd400e75344c/sensors-23-00103-g006.jpg
