Zou Miao, Li Xi, Yuan Quan, Xiong Tao, Zhang Yaozong, Han Jingwei, Xiao Zhenhua
School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China.
College of Information and Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang 330108, China.
Biomimetics (Basel). 2023 Sep 1;8(5):403. doi: 10.3390/biomimetics8050403.
In this article, we propose an effective grasp detection network based on an improved deformable convolution and a spatial feature center mechanism (DCSFC-Grasp) to precisely grasp unknown objects. DCSFC-Grasp comprises the following key components. First, an improved deformable convolution is introduced to adaptively adjust the receptive field for multiscale feature extraction. Then, an efficient spatial feature center (SFC) layer is designed to capture global long-range dependencies through a lightweight multilayer perceptron (MLP) architecture. Furthermore, a learnable feature center (LFC) mechanism is proposed to aggregate local regional features and preserve local corner regions. Finally, a lightweight CARAFE operator is adopted to upsample the features. Experimental results show that DCSFC-Grasp achieves high accuracy (99.3% and 96.1% on the Cornell and Jacquard grasp datasets, respectively) and outperforms existing state-of-the-art grasp detection models. Real-world experiments on a six-DoF Realman RM65 robotic arm further demonstrate that DCSFC-Grasp is effective and robust for grasping unknown targets.
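The abstract describes the building blocks only at a high level. As an illustration, the sketch below shows how two of them could look in PyTorch: a 3x3 deformable convolution whose sampling offsets are predicted from the input (using torchvision's DeformConv2d), and a simplified CARAFE-style content-aware upsampler (kernel prediction followed by reassembly of local neighbourhoods). All class names, layer widths, and the wiring in the usage example are assumptions made for illustration; this is not the authors' DCSFC-Grasp implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    # 3x3 deformable convolution whose sampling offsets are predicted from the
    # input, so the receptive field adapts to the object (hypothetical layer sizes).
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)  # (dx, dy) per kernel tap
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class CARAFEUpsample(nn.Module):
    # Simplified CARAFE-style content-aware upsampler: predict a k_up x k_up
    # reassembly kernel for every output location, then reassemble the
    # corresponding low-resolution neighbourhood with it.
    def __init__(self, channels, scale=2, k_up=5, k_enc=3, compressed=64):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, compressed, 1)
        self.encode = nn.Conv2d(compressed, scale * scale * k_up * k_up,
                                k_enc, padding=k_enc // 2)
        self.shuffle = nn.PixelShuffle(scale)  # (N, s*s*k*k, H, W) -> (N, k*k, sH, sW)

    def forward(self, x):
        n, c, h, w = x.shape
        s, k = self.scale, self.k_up
        # per-location reassembly kernels, softmax-normalized over the k*k taps
        kernels = torch.softmax(self.shuffle(self.encode(self.compress(x))), dim=1)
        # k x k neighbourhoods of the low-resolution feature map
        patches = F.unfold(x, k, padding=k // 2).view(n, c, k * k, h, w)
        # every s x s block of output locations reuses the neighbourhood of its source pixel
        patches = patches.repeat_interleave(s, dim=3).repeat_interleave(s, dim=4)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

if __name__ == "__main__":
    feat = torch.randn(1, 32, 56, 56)           # dummy feature map
    feat = DeformableBlock(32, 64)(feat)        # adaptive receptive field
    feat = CARAFEUpsample(64, scale=2)(feat)    # content-aware 2x upsampling
    print(feat.shape)                           # torch.Size([1, 64, 112, 112])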