Suppr 超能文献



Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.

Authors

Zuo Guoyu, Tong Jiayuan, Liu Hongxing, Chen Wenbai, Li Jianfeng

Affiliations

Faculty of Information Technology, Beijing University of Technology, Beijing, China.

Beijing Key Laboratory of Computing Intelligence and Intelligent Systems, Beijing, China.

Publication

Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.

DOI: 10.3389/fnbot.2021.719731
PMID: 34483872
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8414995/
Abstract

To grasp the target object stably and orderly in the object-stacking scenes, it is important for the robot to reason the relationships between objects and obtain intelligent manipulation order for more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts graph convolutional network (GCN) to collect contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce object pairs before reasoning. The experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods on reasoning object relationships in object-stacking scenes. The GVMRN model is also tested on the images we collected and applied on the robot grasping platform. The results demonstrated the generalization and applicability of our method in real environment.
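The abstract describes a pipeline: detect objects in an RGB image, run a graph convolutional network (GCN) over the object graph to gather contextual features, and filter object pairs before relation reasoning. The following is a minimal numpy sketch of those two ideas only — a normalized GCN layer and a score-based pair filter. It is not the authors' implementation; all function names, dimensions, and the dot-product filter score are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(node_feats, adj, weight):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    node_feats: (N, F) per-object features from a detector backbone.
    adj:        (N, N) object-graph adjacency (no self-loops).
    """
    a = adj + np.eye(adj.shape[0])          # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_norm = d @ a @ d                      # symmetric normalization
    return np.maximum(a_norm @ node_feats @ weight, 0.0)

def filter_pairs(node_feats, threshold=0.5):
    """Keep only directed object pairs whose filter score passes a
    threshold, so the relation head reasons over fewer candidates."""
    n, f = node_feats.shape
    pairs = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            score = sigmoid(node_feats[i] @ node_feats[j] / f)
            if score > threshold:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))        # 4 detected objects, 16-d features
adj = np.ones((4, 4)) - np.eye(4)           # fully connected object graph
w = rng.standard_normal((16, 16)) * 0.1
ctx = gcn_layer(feats, adj, w)              # contextual features per object
pairs = filter_pairs(ctx)                   # surviving candidate pairs
```

In the paper the remaining pairs would then be scored for manipulation relations (e.g. which object must be grasped first); this sketch stops at the filtering step.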


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/71a887eb6d39/fnbot-15-719731-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/aae7a19e75b2/fnbot-15-719731-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/2a15cc413ef8/fnbot-15-719731-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/3ae4275478ed/fnbot-15-719731-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/88fad979e479/fnbot-15-719731-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/49c3d5b3a207/fnbot-15-719731-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/535ef1aa39d3/fnbot-15-719731-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c874/8414995/8294d829f4a4/fnbot-15-719731-g0008.jpg

Similar Articles

1. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
   Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.
2. Secure Grasping Detection of Objects in Stacked Scenes Based on Single-Frame RGB Images.
   Sensors (Basel). 2023 Sep 24;23(19):8054. doi: 10.3390/s23198054.
3. Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes.
   Sensors (Basel). 2021 Mar 18;21(6):2132. doi: 10.3390/s21062132.
4. SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network.
   IEEE Trans Image Process. 2021;30:8686-8701. doi: 10.1109/TIP.2021.3118983. Epub 2021 Oct 22.
5. A two-stage grasp detection method for sequential robotic grasping in stacking scenarios.
   Math Biosci Eng. 2024 Feb 5;21(2):3448-3472. doi: 10.3934/mbe.2024152.
6. A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints.
   Sci Rep. 2022 Mar 10;12(1):3927. doi: 10.1038/s41598-022-07900-2.
7. A Novel Robotic Pushing and Grasping Method Based on Vision Transformer and Convolution.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10832-10845. doi: 10.1109/TNNLS.2023.3244186. Epub 2024 Aug 5.
8. A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.
   Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.
9. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.
   Sensors (Basel). 2016 May 5;16(5):640. doi: 10.3390/s16050640.
10. Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
   Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.

Cited By

1. Visual Sorting of Express Packages Based on the Multi-Dimensional Fusion Method under Complex Logistics Sorting.
   Entropy (Basel). 2023 Feb 5;25(2):298. doi: 10.3390/e25020298.

References

1. Visual Sorting of Express Parcels Based on Multi-Task Deep Learning.
   Sensors (Basel). 2020 Nov 27;20(23):6785. doi: 10.3390/s20236785.
2. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.