

Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes.

Affiliations

College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.

Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China.

Publication Information

Sensors (Basel). 2021 Mar 18;21(6):2132. doi: 10.3390/s21062132.

DOI: 10.3390/s21062132
PMID: 33803673
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8002942/
Abstract

Robot grasping is an important direction in intelligent robots. However, how to help robots grasp specific objects in multi-object scenes is still a challenging problem. In recent years, due to the powerful feature extraction capabilities of convolutional neural networks (CNN), various algorithms based on convolutional neural networks have been proposed to solve the problem of grasp detection. Different from anchor-based grasp detection algorithms, in this paper, we propose a keypoint-based scheme to solve this problem. We model an object or a grasp as a single point-the center point of its bounding box. The detector uses keypoint estimation to find the center point and regress to all other object attributes such as size, direction, etc. Experimental results demonstrate that the accuracy of this method is 74.3% in the multi-object grasp dataset VMRD, and the performance on the single-object scene Cornell dataset is competitive with the current state-of-the-art grasp detection algorithm. Robot experiments demonstrate that this method can help robots grasp the target in single-object and multi-object scenes with overall success rates of 94% and 87%, respectively.
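The abstract describes a CenterNet-style pipeline: a detector finds a grasp's center point via keypoint (heatmap) estimation, then regresses the remaining attributes such as size and direction at that location. A minimal decoding sketch of that idea follows; the function name `decode_grasps`, the map shapes, and the attribute layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def decode_grasps(heatmap, size_map, angle_map, top_k=5, threshold=0.3):
    """Decode grasp rectangles from a keypoint heatmap (CenterNet-style sketch).

    heatmap:   (H, W) grasp-center confidence scores in [0, 1]
    size_map:  (2, H, W) regressed (width, height) of the grasp box per pixel
    angle_map: (H, W) regressed grasp rotation angle per pixel

    Returns a list of (cx, cy, w, h, theta, score) tuples.
    """
    H, W = heatmap.shape
    flat = heatmap.ravel()
    # Take the top-k scoring pixels as candidate grasp centers.
    order = np.argsort(flat)[::-1][:top_k]
    grasps = []
    for idx in order:
        score = flat[idx]
        if score < threshold:
            break  # remaining candidates score even lower
        cy, cx = divmod(idx, W)  # recover 2-D coordinates of the peak
        w, h = size_map[:, cy, cx]     # look up regressed box size at the center
        theta = angle_map[cy, cx]      # look up regressed rotation angle
        grasps.append((float(cx), float(cy), float(w), float(h),
                       float(theta), float(score)))
    return grasps
```

In a full keypoint detector the peaks would first be sharpened with local-maximum filtering (e.g. a 3x3 max-pool) instead of a plain top-k over raw scores, and the outputs would be rescaled from feature-map to image coordinates.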


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/3bc478710b96/sensors-21-02132-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/a50b226a521a/sensors-21-02132-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/9d07a5618339/sensors-21-02132-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/f93f331d6ba6/sensors-21-02132-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/cfce2cbafac8/sensors-21-02132-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/bfbb8947aabc/sensors-21-02132-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc90/8002942/eef6a3fd2407/sensors-21-02132-g007.jpg

Similar Articles

1
Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes.
Sensors (Basel). 2021 Mar 18;21(6):2132. doi: 10.3390/s21062132.
2
Secure Grasping Detection of Objects in Stacked Scenes Based on Single-Frame RGB Images.
Sensors (Basel). 2023 Sep 24;23(19):8054. doi: 10.3390/s23198054.
3
A Practical Multi-Stage Grasp Detection Method for Kinova Robot in Stacked Environments.
Micromachines (Basel). 2022 Dec 31;14(1):117. doi: 10.3390/mi14010117.
4
Robot Intelligent Grasp of Unknown Objects Based on Multi-Sensor Information.
Sensors (Basel). 2019 Apr 2;19(7):1595. doi: 10.3390/s19071595.
5
A Real-Time Grasping Detection Network Architecture for Various Grasping Scenarios.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8215-8226. doi: 10.1109/TNNLS.2024.3419180. Epub 2025 May 2.
6
Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.
7
A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.
Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.
8
A two-stage grasp detection method for sequential robotic grasping in stacking scenarios.
Math Biosci Eng. 2024 Feb 5;21(2):3448-3472. doi: 10.3934/mbe.2024152.
9
Multi-Channel Convolutional Neural Network Based 3D Object Detection for Indoor Robot Environmental Perception.
Sensors (Basel). 2019 Feb 21;19(4):893. doi: 10.3390/s19040893.
10
A Novel Robotic Pushing and Grasping Method Based on Vision Transformer and Convolution.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10832-10845. doi: 10.1109/TNNLS.2023.3244186. Epub 2024 Aug 5.

Cited By

1
A Practical Multi-Stage Grasp Detection Method for Kinova Robot in Stacked Environments.
Micromachines (Basel). 2022 Dec 31;14(1):117. doi: 10.3390/mi14010117.
2
Multi-Objective Location and Mapping Based on Deep Learning and Visual SLAM.
Sensors (Basel). 2022 Oct 6;22(19):7576. doi: 10.3390/s22197576.
3
Improved Multi-Stream Convolutional Block Attention Module for sEMG-Based Gesture Recognition.

References

1
Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset.
Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.
2
Focal Loss for Dense Object Detection.
IEEE Trans Pattern Anal Mach Intell. 2020 Feb;42(2):318-327. doi: 10.1109/TPAMI.2018.2858826. Epub 2018 Jul 23.
3
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
4
Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
5
Image Sensing and Processing with Convolutional Neural Networks.
Sensors (Basel). 2022 May 10;22(10):3612. doi: 10.3390/s22103612.
6
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.