
Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking

Authors

Jiang Ping, Oaki Junji, Ishihara Yoshiyuki, Ooga Junichiro, Han Haifeng, Sugahara Atsushi, Tokura Seiji, Eto Haruna, Komoda Kazuma, Ogawa Akihito

Affiliation

Corporate Research & Development Center, Toshiba Corporation, Kawasaki, Japan.

Publication

Front Neurorobot. 2022 Mar 24;16:806898. doi: 10.3389/fnbot.2022.806898. eCollection 2022.

DOI: 10.3389/fnbot.2022.806898
PMID: 35401137
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8987443/
Abstract

Deep learning has been widely used for inferring robust grasps. Although human-labeled RGB-D datasets were initially used to learn grasp configurations, preparation of this kind of large dataset is expensive. To address this problem, images were generated by a physical simulator, and a physically inspired model (e.g., a contact model between a suction vacuum cup and object) was used as a grasp quality evaluation metric to annotate the synthesized images. However, this kind of contact model is complicated and requires parameter identification by experiments to ensure real world performance. In addition, previous studies have not considered manipulator reachability such as when a grasp configuration with high grasp quality is unable to reach the target due to collisions or the physical limitations of the robot. In this study, we propose an intuitive geometric analytic-based grasp quality evaluation metric. We further incorporate a reachability evaluation metric. We annotate the pixel-wise grasp quality and reachability by the proposed evaluation metric on synthesized images in a simulator to train an auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++). Experiment results show that our intuitive grasp quality evaluation metric is competitive with a physically-inspired metric. Learning the reachability helps to reduce motion planning computation time by removing obviously unreachable candidates. The system achieves an overall picking speed of 560 PPH (pieces per hour).
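The abstract's "intuitive geometric analytic-based grasp quality evaluation metric" scores suction candidates pixel-wise from geometry alone, without a physical vacuum-cup contact model. The paper does not spell out the formula here, but the idea can be illustrated with a minimal sketch: score each pixel of a depth image by how planar the surface is under the suction-cup footprint, since a vacuum cup seals best on locally flat, low-variance regions. The function name, cup radius, and flatness threshold below are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np

def suction_quality_map(depth, cup_radius_px=5, flat_thresh=0.005):
    """Pixel-wise geometric suction-quality heuristic (illustrative sketch).

    For each pixel, examine the depth patch under a hypothetical
    suction-cup footprint of radius `cup_radius_px` and map low depth
    variance (a locally planar surface) to a high score in [0, 1].
    This is NOT the paper's exact metric; parameters are assumed.
    """
    h, w = depth.shape
    quality = np.zeros_like(depth, dtype=np.float64)
    r = cup_radius_px
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = depth[y - r:y + r + 1, x - r:x + r + 1]
            # Flatness: standard deviation of depth under the cup footprint.
            sigma = patch.std()
            # Low deviation -> high quality, clipped at zero.
            quality[y, x] = max(0.0, 1.0 - sigma / flat_thresh)
    return quality

# Toy example: a perfectly flat plane scores 1.0 away from the border.
flat = np.full((20, 20), 0.5)
qmap = suction_quality_map(flat)
```

In the paper's pipeline, maps like this (plus a reachability map) are rendered for synthesized scenes in a simulator and used as pixel-wise training targets for SG-U-Net++, which then predicts them directly from depth images at runtime.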


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/d43382527262/fnbot-16-806898-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/8e3157881114/fnbot-16-806898-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/ba08cd6538cc/fnbot-16-806898-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/a6e9d095adb7/fnbot-16-806898-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/ab101011a4b1/fnbot-16-806898-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/4a71d1db9b41/fnbot-16-806898-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/9eaf39cdbe6d/fnbot-16-806898-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/d43382527262/fnbot-16-806898-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/8e3157881114/fnbot-16-806898-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/ba08cd6538cc/fnbot-16-806898-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/a6e9d095adb7/fnbot-16-806898-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/ab101011a4b1/fnbot-16-806898-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/4a71d1db9b41/fnbot-16-806898-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/9eaf39cdbe6d/fnbot-16-806898-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c824/8987443/d43382527262/fnbot-16-806898-g0007.jpg

Similar Articles

1
Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking.
Front Neurorobot. 2022 Mar 24;16:806898. doi: 10.3389/fnbot.2022.806898. eCollection 2022.
2
Depth Image-Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking.
Sensors (Basel). 2020 Jan 28;20(3):706. doi: 10.3390/s20030706.
3
A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.
Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.
4
Learning ambidextrous robot grasping policies.
Sci Robot. 2019 Jan 16;4(26). doi: 10.1126/scirobotics.aau4984.
5
Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications.
Sensors (Basel). 2021 Jan 26;21(3):816. doi: 10.3390/s21030816.
6
Robot Grasp Planning: A Learning from Demonstration-Based Approach.
Sensors (Basel). 2024 Jan 18;24(2):618. doi: 10.3390/s24020618.
7
Deep learning can accelerate grasp-optimized motion planning.
Sci Robot. 2020 Nov 18;5(48). doi: 10.1126/scirobotics.abd7710.
8
Exploiting Robot Hand Compliance and Environmental Constraints for Edge Grasps.
Front Robot AI. 2019 Dec 19;6:135. doi: 10.3389/frobt.2019.00135. eCollection 2019.
9
DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-Based Robotic Grasping.
Front Robot AI. 2020 Sep 17;7:120. doi: 10.3389/frobt.2020.00120. eCollection 2020.
10
GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping.
Sensors (Basel). 2022 Aug 18;22(16):6208. doi: 10.3390/s22166208.

Cited By

1
Learning-based robotic grasping: A review.
Front Robot AI. 2023 Apr 4;10:1038658. doi: 10.3389/frobt.2023.1038658. eCollection 2023.

References

1
Development and Grasp Stability Estimation of Sensorized Soft Robotic Hand.
Front Robot AI. 2021 Mar 31;8:619390. doi: 10.3389/frobt.2021.619390. eCollection 2021.
2
Learning ambidextrous robot grasping policies.
Sci Robot. 2019 Jan 16;4(26). doi: 10.1126/scirobotics.aau4984.
3
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.
4
SciPy 1.0: fundamental algorithms for scientific computing in Python.
Nat Methods. 2020 Mar;17(3):261-272. doi: 10.1038/s41592-019-0686-2. Epub 2020 Feb 3.
5
Depth Image-Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking.
Sensors (Basel). 2020 Jan 28;20(3):706. doi: 10.3390/s20030706.
6
Introduction: Special Issue on Enabling Robot Autonomy.
Integr Comput Aided Eng. 2018;25. doi: 10.3233/ICA-180570.
7
Grasp quality measures: review and performance.
Auton Robots. 2015;38(1):65-88. doi: 10.1007/s10514-014-9402-3. Epub 2014 Jul 31.