
DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-Based Robotic Grasping

Authors

Patten Timothy, Park Kiru, Vincze Markus

Affiliation

Vision for Robotics Laboratory, Automation and Control Institute, TU Wien, Vienna, Austria.

Publication

Front Robot AI. 2020 Sep 17;7:120. doi: 10.3389/frobt.2020.00120. eCollection 2020.

DOI: 10.3389/frobt.2020.00120
PMID: 33501286
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805634/
Abstract

This article presents a method for grasping novel objects by learning from experience. Successful attempts are remembered and then used to guide future grasps such that more reliable grasping is achieved over time. To transfer the learned experience to unseen objects, we introduce the dense geometric correspondence matching network (DGCM-Net). This applies metric learning to encode objects with similar geometry nearby in feature space. Retrieving relevant experience for an unseen object is thus a nearest neighbor search with the encoded feature maps. DGCM-Net also reconstructs 3D-3D correspondences using the view-dependent normalized object coordinate space to transform grasp configurations from retrieved samples to unseen objects. In comparison to baseline methods, our approach achieves an equivalent grasp success rate. However, the baselines are significantly improved when fusing the knowledge from experience with their grasp proposal strategy. Offline experiments with a grasping dataset highlight the capability to transfer grasps to new instances as well as to improve success rate over time from increasing experience. Lastly, by learning task-relevant grasps, our approach can prioritize grasp configurations that enable the functional use of objects.
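
To make the retrieval step concrete, the following minimal Python sketch (an illustration under stated assumptions, not the authors' implementation) shows how stored grasp experience can be retrieved by nearest-neighbor search in an embedding space. The random vectors stand in for DGCM-Net's metric-learned feature encodings, which the abstract says place objects with similar geometry nearby in feature space:

import numpy as np

def nearest_experience(query_feat, experience_feats):
    # Euclidean nearest neighbor: because metric learning places similar
    # geometry nearby in feature space, the closest stored embedding
    # indexes the most relevant remembered grasp experience.
    dists = np.linalg.norm(experience_feats - query_feat, axis=1)
    return int(np.argmin(dists))

# Placeholder embeddings; a real system would obtain these by applying
# the trained encoder to observations of the objects.
rng = np.random.default_rng(0)
experience_bank = rng.normal(size=(100, 128))  # 100 remembered successes
query = rng.normal(size=128)                   # encoding of the unseen object
print("retrieved experience:", nearest_experience(query, experience_bank))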

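The transfer step turns the retrieved sample's 3D-3D correspondences (reconstructed via the view-dependent normalized object coordinate space) into a rigid transform that carries the stored grasp configuration onto the unseen object. A standard way to estimate such a transform from point correspondences is the Kabsch algorithm; the sketch below assumes that choice, with hypothetical names throughout, and is not the paper's code:

import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) such that R @ src_i + t ≈ dst_i.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def transfer_grasp(grasp_pose, src_pts, dst_pts):
    # Map a 4x4 grasp pose from the retrieved object's frame onto the
    # unseen object using the transform estimated from correspondences.
    R, t = kabsch(src_pts, dst_pts)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T @ grasp_pose

In practice, correspondences predicted by a network are noisy, so a robust estimator (for example, RANSAC wrapped around the same Kabsch fit) would typically replace the direct least-squares solve.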

[Figures 1-12 (frobt-07-00120-g0001 through g0012): see the PMC full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC7805634/]

Similar Articles

1. DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-Based Robotic Grasping.
Front Robot AI. 2020 Sep 17;7:120. doi: 10.3389/frobt.2020.00120. eCollection 2020.
2. GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping.
Sensors (Basel). 2022 Aug 18;22(16):6208. doi: 10.3390/s22166208.
3. Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset.
Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.
4. Robotic Grasping of Unknown Objects Based on Deep Learning-Based Feature Detection.
Sensors (Basel). 2024 Jul 26;24(15):4861. doi: 10.3390/s24154861.
5. Neuromorphic Vision Based Contact-Level Classification in Robotic Grasping Applications.
Sensors (Basel). 2020 Aug 21;20(17):4724. doi: 10.3390/s20174724.
6. Exploiting Robot Hand Compliance and Environmental Constraints for Edge Grasps.
Front Robot AI. 2019 Dec 19;6:135. doi: 10.3389/frobt.2019.00135. eCollection 2019.
7. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.
J Neuroeng Rehabil. 2016 Mar 18;13:28. doi: 10.1186/s12984-016-0134-9.
8. Humans Can Visually Judge Grasp Quality and Refine Their Judgments Through Visual and Haptic Feedback.
Front Neurosci. 2021 Jan 12;14:591898. doi: 10.3389/fnins.2020.591898. eCollection 2020.
9. Realtime Hand-Object Interaction Using Learned Grasp Space for Virtual Environments.
IEEE Trans Vis Comput Graph. 2019 Aug;25(8):2623-2635. doi: 10.1109/TVCG.2018.2849381. Epub 2018 Jun 21.
10. Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.

Cited By

1. An intelligent emulsion explosive grasping and filling system based on YOLO-SimAM-GRCNN.
Sci Rep. 2024 Nov 18;14(1):28425. doi: 10.1038/s41598-024-77034-0.
2. [Recognizing transparent objects for laboratory automation].
Elektrotech Informationstechnik. 2023;140(6):519-529. doi: 10.1007/s00502-023-01158-w. Epub 2023 Sep 12.