

Robotic Grasp Detection Network Based on Improved Deformable Convolution and Spatial Feature Center Mechanism.

Authors

Zou Miao, Li Xi, Yuan Quan, Xiong Tao, Zhang Yaozong, Han Jingwei, Xiao Zhenhua

Affiliations

School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China.

College of Information and Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang 330108, China.

Publication

Biomimetics (Basel). 2023 Sep 1;8(5):403. doi: 10.3390/biomimetics8050403.

DOI: 10.3390/biomimetics8050403
PMID: 37754154
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10527218/
Abstract

In this article, we propose an effective grasp detection network based on an improved deformable convolution and spatial feature center mechanism (DCSFC-Grasp) to precisely grasp unidentified objects. DCSFC-Grasp includes three key procedures as follows. First, improved deformable convolution is introduced to adaptively adjust receptive fields for multiscale feature information extraction. Then, an efficient spatial feature center (SFC) layer is explored to capture the global remote dependencies through a lightweight multilayer perceptron (MLP) architecture. Furthermore, a learnable feature center (LFC) mechanism is reported to gather local regional features and preserve the local corner region. Finally, a lightweight CARAFE operator is developed to upsample the features. Experimental results show that DCSFC-Grasp achieves a high accuracy (99.3% and 96.1% for the Cornell and Jacquard grasp datasets, respectively) and even outperforms the existing state-of-the-art grasp detection models. The results of real-world experiments on the six-DoF Realman RM65 robotic arm further demonstrate that our DCSFC-Grasp is effective and robust for the grasping of unknown targets.
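The abstract's first component, deformable convolution, samples the input at learned fractional offsets from the regular kernel grid instead of at fixed positions. The following is a minimal single-channel NumPy sketch of that idea, not the authors' DCSFC-Grasp implementation (which uses an improved variant inside a full network); all function names here are illustrative. With all offsets set to zero it reduces to an ordinary convolution.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2-D array at fractional coords (y, x); zero outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yi, xi = y0 + dy, x0 + dx
            if 0 <= yi < H and 0 <= xi < W:
                val += img[yi, xi] * (1 - abs(y - yi)) * (1 - abs(x - xi))
    return val

def deform_conv2d(x, weight, offsets):
    """Single-channel 2-D deformable convolution (stride 1, no padding).

    x       : (H, W) input
    weight  : (kH, kW) kernel
    offsets : (outH, outW, kH, kW, 2) learned (dy, dx) per sampling point;
              in a real network these come from a small conv branch.
    """
    H, W = x.shape
    kH, kW = weight.shape
    outH, outW = H - kH + 1, W - kW + 1
    out = np.zeros((outH, outW))
    for i in range(outH):
        for j in range(outW):
            acc = 0.0
            for a in range(kH):
                for b in range(kW):
                    dy, dx = offsets[i, j, a, b]
                    # Sample off-grid: the receptive field deforms per pixel.
                    acc += weight[a, b] * bilinear_sample(x, i + a + dy, j + b + dx)
            out[i, j] = acc
    return out
```

Because the offsets are learned per output location, the effective receptive field can stretch toward object boundaries, which is what lets the network extract multiscale feature information adaptively.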

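The Cornell and Jacquard accuracies quoted in the abstract are conventionally computed with the rectangle metric: a predicted grasp counts as correct if its orientation is within 30° of a ground-truth grasp and the two rectangles overlap with IoU above 0.25. The sketch below illustrates that criterion; for brevity the IoU is computed for axis-aligned boxes, whereas the real benchmark uses rotated rectangles (requiring polygon clipping). All names are illustrative, not from the paper's code.

```python
def angle_diff_deg(a, b):
    """Smallest difference between two grasp angles in degrees (180-degree periodic,
    since a gripper rotated by 180 degrees grasps identically)."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def iou_axis_aligned(r1, r2):
    """IoU of two axis-aligned rectangles given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(r1[0], r2[0]), max(r1[1], r2[1])
    ix2, iy2 = min(r1[2], r2[2]), min(r1[3], r2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (r1[2] - r1[0]) * (r1[3] - r1[1])
    a2 = (r2[2] - r2[0]) * (r2[3] - r2[1])
    return inter / (a1 + a2 - inter) if inter else 0.0

def grasp_correct(pred_rect, pred_angle, gt_rect, gt_angle,
                  iou_thresh=0.25, angle_thresh=30.0):
    """Rectangle-metric check: angle within 30 degrees AND IoU above 0.25."""
    return (angle_diff_deg(pred_angle, gt_angle) < angle_thresh
            and iou_axis_aligned(pred_rect, gt_rect) > iou_thresh)
```

A prediction is typically compared against every annotated ground-truth grasp for the object and counted correct if it matches any of them.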

Figures (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/787a742453e7/biomimetics-08-00403-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/19fe3cdc5719/biomimetics-08-00403-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/e61159a669fe/biomimetics-08-00403-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/d4b31517d80d/biomimetics-08-00403-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/f1efe21b2c37/biomimetics-08-00403-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/ca40debd5837/biomimetics-08-00403-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b71d/10527218/dcbe7bd2e577/biomimetics-08-00403-g007.jpg

Similar articles

1. Robotic Grasp Detection Network Based on Improved Deformable Convolution and Spatial Feature Center Mechanism.
Biomimetics (Basel). 2023 Sep 1;8(5):403. doi: 10.3390/biomimetics8050403.
2. GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping.
Sensors (Basel). 2022 Aug 18;22(16):6208. doi: 10.3390/s22166208.
3. Bilateral Cross-Modal Fusion Network for Robot Grasp Detection.
Sensors (Basel). 2023 Mar 22;23(6):3340. doi: 10.3390/s23063340.
4. A Real-Time Grasping Detection Network Architecture for Various Grasping Scenarios.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8215-8226. doi: 10.1109/TNNLS.2024.3419180. Epub 2025 May 2.
5. Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
6. Few-shot learning with deformable convolution for multiscale lesion detection in mammography.
Med Phys. 2020 Jul;47(7):2970-2985. doi: 10.1002/mp.14129. Epub 2020 Mar 31.
7. Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes.
Sensors (Basel). 2021 Mar 18;21(6):2132. doi: 10.3390/s21062132.
8. Bio-inspired affordance learning for 6-DoF robotic grasping: A transformer-based global feature encoding approach.
Neural Netw. 2024 Mar;171:332-342. doi: 10.1016/j.neunet.2023.12.005. Epub 2023 Dec 8.
9. Centralized Feature Pyramid for Object Detection.
IEEE Trans Image Process. 2023;32:4341-4354. doi: 10.1109/TIP.2023.3297408. Epub 2023 Aug 2.
10. A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.
Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.

Cited by

1. G-RCenterNet: Reinforced CenterNet for Robotic Arm Grasp Detection.
Sensors (Basel). 2024 Dec 20;24(24):8141. doi: 10.3390/s24248141.

References

1. Centralized Feature Pyramid for Object Detection.
IEEE Trans Image Process. 2023;32:4341-4354. doi: 10.1109/TIP.2023.3297408. Epub 2023 Aug 2.
2. Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):1328-1334. doi: 10.1109/TPAMI.2022.3145427. Epub 2022 Dec 5.
3. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.