

Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications.

Affiliations

Department of Autonomous and Intelligent Systems, Fundación Tekniker, Iñaki Goenaga, 5-20600 Eibar, Spain.

Computer Science and Artificial Intelligence (UPV/EHU), Pº Manuel Lardizabal, 1-20018 Donostia-San Sebastián, Spain.

Publication

Sensors (Basel). 2021 Jan 26;21(3):816. doi: 10.3390/s21030816.

DOI: 10.3390/s21030816
PMID: 33530409
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7865998/
Abstract

Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning based methods have been widely used to predict grasping points and have shown strong generalization capabilities under uncertainty. In particular, approaches that predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from 3-dimensional point clouds would lead to a performance boost in predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking oriented data preprocessing pipeline which helps to ease the learning process and to create a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper, respectively.
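The Deep Graph CNN the abstract refers to is built around the EdgeConv operation: each point gathers its k nearest neighbors, forms edge features from the pair (point feature, neighbor offset), applies a learned transform, and max-pools over neighbors. The following is an illustrative NumPy sketch of that building block, not the authors' implementation; the function names, the single weight matrix, and the choice of ReLU are simplifying assumptions.

```python
import numpy as np

def knn_graph(points, k):
    """Indices of the k nearest neighbors of each point (self excluded)."""
    # Pairwise squared distances, shape (N, N).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own neighborhood
    return np.argsort(d2, axis=1)[:, :k]  # (N, k)

def edge_conv(points, feats, k, weight):
    """One EdgeConv-style layer: for point i with neighbors j,
    out_i = max_j ReLU(W @ [x_i, x_j - x_i])."""
    idx = knn_graph(points, k)                    # (N, k) neighbor indices
    xi = np.repeat(feats[:, None, :], k, axis=1)  # (N, k, F) center features
    xj = feats[idx]                               # (N, k, F) neighbor features
    edge = np.concatenate([xi, xj - xi], axis=-1) # (N, k, 2F) edge features
    out = np.maximum(edge @ weight, 0.0)          # linear transform + ReLU
    return out.max(axis=1)                        # max-pool over neighbors
```

Because the graph is rebuilt from coordinates at each layer, neighborhoods can be recomputed in feature space in deeper layers, which is what makes the "dynamic" graph CNN dynamic; this sketch keeps a single static k-NN graph for simplicity.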


Figures 1-20 are available in the PMC full text (PMC7865998).

Similar Articles

1. Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications. Sensors (Basel). 2021 Jan 26;21(3):816. doi: 10.3390/s21030816.
2. Bin-Picking for Planar Objects Based on a Deep Learning Network: A Case Study of USB Packs. Sensors (Basel). 2019 Aug 19;19(16):3602. doi: 10.3390/s19163602.
3. Data-Driven Object Pose Estimation in a Practical Bin-Picking Application. Sensors (Basel). 2021 Sep 11;21(18):6093. doi: 10.3390/s21186093.
4. CEPB dataset: a photorealistic dataset to foster the research on bin picking in cluttered environments. Front Robot AI. 2024 May 16;11:1222465. doi: 10.3389/frobt.2024.1222465. eCollection 2024.
5. Failure Handling of Robotic Pick and Place Tasks With Multimodal Cues Under Partial Object Occlusion. Front Neurorobot. 2021 Mar 8;15:570507. doi: 10.3389/fnbot.2021.570507. eCollection 2021.
6. Depth Image-Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking. Sensors (Basel). 2020 Jan 28;20(3):706. doi: 10.3390/s20030706.
7. Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset. Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.
8. Point Pair Feature-Based Pose Estimation with Multiple Edge Appearance Models (PPF-MEAM) for Robotic Bin Picking. Sensors (Basel). 2018 Aug 18;18(8):2719. doi: 10.3390/s18082719.
9. A 6D Pose Estimation for Robotic Bin-Picking Using Point-Pair Features with Curvature (Cur-PPF). Sensors (Basel). 2022 Feb 24;22(5):1805. doi: 10.3390/s22051805.
10. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping. Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.

Cited By

1. Research on Non-Pooling YOLOv5 Based Algorithm for the Recognition of Randomly Distributed Multiple Types of Parts. Sensors (Basel). 2022 Nov 30;22(23):9335. doi: 10.3390/s22239335.
2. Review of Learning-Based Robotic Manipulation in Cluttered Environments. Sensors (Basel). 2022 Oct 18;22(20):7938. doi: 10.3390/s22207938.
3. A 6D Pose Estimation for Robotic Bin-Picking Using Point-Pair Features with Curvature (Cur-PPF). Sensors (Basel). 2022 Feb 24;22(5):1805. doi: 10.3390/s22051805.

References

1. Graph convolutional networks: a comprehensive review. Comput Soc Netw. 2019;6(1):11. doi: 10.1186/s40649-019-0069-y. Epub 2019 Nov 10.
2. Learning ambidextrous robot grasping policies. Sci Robot. 2019 Jan 16;4(26). doi: 10.1126/scirobotics.aau4984.
3. A Comprehensive Survey on Graph Neural Networks. IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):4-24. doi: 10.1109/TNNLS.2020.2978386. Epub 2021 Jan 4.