Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.

Authors

Zuo Guoyu, Tong Jiayuan, Liu Hongxing, Chen Wenbai, Li Jianfeng

Affiliations

Faculty of Information Technology, Beijing University of Technology, Beijing, China.

Beijing Key Laboratory of Computing Intelligence and Intelligent Systems, Beijing, China.

Publication information

Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.

Abstract

To grasp a target object stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and obtain an intelligent manipulation order, enabling more advanced interaction between the robot and its environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and the manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to collect contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods in reasoning about object relationships in object-stacking scenes. The GVMRN model was also tested on images we collected and applied on a robotic grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
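The abstract describes a pipeline of detector features, a GCN that propagates context between detected objects, and a filtering network that prunes object pairs before relation classification. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: it assumes per-object features (e.g., ROI features from a detector such as Faster R-CNN), a fully connected object graph, three relation classes, and the module names `SimpleGCNLayer` and `RelationReasoningHead`, all of which are illustrative choices rather than details taken from the paper.

```python
# Illustrative sketch of GVMRN-style relation reasoning (assumptions noted above).
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: average neighbor features, then project."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) adjacency including self-loops; row-normalize so each node
        # aggregates itself and its neighbors before the linear projection.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = (adj / deg) @ node_feats
        return torch.relu(self.proj(agg))


class RelationReasoningHead(nn.Module):
    """Pairwise relation classifier with a pair-filtering gate (hypothetical layout)."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128, num_relations: int = 3):
        super().__init__()
        self.gcn = SimpleGCNLayer(feat_dim, hidden_dim)
        # Filtering network: scores each object pair so unlikely pairs can be
        # skipped before the relation classifier is applied.
        self.pair_filter = nn.Sequential(
            nn.Linear(2 * hidden_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        self.rel_classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, 64), nn.ReLU(), nn.Linear(64, num_relations)
        )

    def forward(self, obj_feats: torch.Tensor, keep_threshold: float = 0.5):
        n = obj_feats.shape[0]
        adj = torch.ones(n, n)                 # fully connected object graph
        ctx = self.gcn(obj_feats, adj)         # context-aware object features

        relations = {}
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                pair = torch.cat([ctx[i], ctx[j]], dim=-1)
                # Drop pairs the filter considers unlikely to be related.
                if torch.sigmoid(self.pair_filter(pair)) < keep_threshold:
                    continue
                logits = self.rel_classifier(pair)
                relations[(i, j)] = int(logits.argmax())
        # Example label convention (assumed): 0 = "i rests on j", 1 = "j rests on i", 2 = no relation.
        return relations


if __name__ == "__main__":
    head = RelationReasoningHead()
    fake_roi_features = torch.randn(4, 256)    # four detected objects (dummy features)
    print(head(fake_roi_features))
```

In such a design, the predicted pairwise relations form a directed "support" graph, and a manipulation order can then be obtained by topologically sorting it so that objects on top are grasped before the objects beneath them.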


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/71a887eb6d39/fnbot-15-719731-g0001.jpg
