Suppr 超能文献


A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.

Authors

Zhang Yang, Xie Lihua, Li Yuheng, Li Yuan

Affiliations

China Tobacco Sichuan Industrial Co., Ltd, Chengdu, Sichuan, China.

Qinhuangdao Tobacco Machinery Co., Ltd, Qinhuangdao, Hebei, China.

Publication

Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.

DOI: 10.3389/fncom.2023.1110889
PMID: 36890968
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9986287/
Abstract

Object detection and grasp detection are essential for unmanned systems working in cluttered real-world environments. Detecting a grasp configuration for each object in the scene would enable reasoned manipulation. However, finding the relationships between objects and grasp configurations remains a challenging problem. To this end, we propose a novel neural learning approach, SOGD, which predicts the best grasp configuration for each detected object from an RGB-D image. The cluttered background is first filtered out via a 3D-plane-based approach. Two separate branches then detect objects and grasp candidates, respectively, and an additional alignment module learns the relationship between object proposals and grasp candidates. A series of experiments on two public datasets (the Cornell Grasp Dataset and the Jacquard Dataset) demonstrates the superior performance of SOGD over state-of-the-art methods in predicting reasonable grasp configurations from a cluttered scene.
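The abstract describes a three-stage pipeline: plane-based background removal, parallel object/grasp detection branches, and an alignment module. The first stage can be sketched as follows. This is a minimal illustration only, not the authors' code: it uses a simple least-squares plane fit (the paper's exact fitting method is not specified here), and the function name, threshold, and synthetic data are all assumptions.

```python
import numpy as np

def filter_background_plane(points, threshold=0.02):
    """Drop points lying near the dominant scene plane (e.g. a tabletop).

    Illustrative stand-in for 3D-plane-based background filtering:
    fit z = a*x + b*y + c by least squares, then keep only points
    whose distance to that plane exceeds `threshold`.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    # Point-to-plane distance for a*x + b*y - z + c = 0.
    dist = np.abs(a * points[:, 0] + b * points[:, 1] - points[:, 2] + c)
    dist /= np.sqrt(a * a + b * b + 1.0)
    return points[dist > threshold]

# Synthetic point cloud: a flat table at z = 0 plus one small
# object sitting 5-7 cm above it.
rng = np.random.default_rng(0)
table = np.c_[rng.random((200, 2)), np.zeros(200)]
obj = np.c_[rng.random((20, 2)), 0.05 + 0.02 * rng.random(20)]
scene = np.vstack([table, obj])
foreground = filter_background_plane(scene)
```

After filtering, only the elevated object points remain, and the downstream detection branches would operate on this foreground region. In practice a robust estimator such as RANSAC is more common than a plain least-squares fit, since outliers (the objects themselves) bias the fitted plane.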


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/a44c2febe3a8/fncom-17-1110889-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/2af3da5b5b11/fncom-17-1110889-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/143ffac4ec6c/fncom-17-1110889-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/98b09077adcd/fncom-17-1110889-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/5bd323f21eb8/fncom-17-1110889-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/e737d5570913/fncom-17-1110889-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/d373785e4faf/fncom-17-1110889-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60c2/9986287/5bb76aba5016/fncom-17-1110889-g0008.jpg

Similar Articles

1. A neural learning approach for simultaneous object detection and grasp detection in cluttered scenes.
Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.
2. Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
3. Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes.
Sensors (Basel). 2021 Mar 18;21(6):2132. doi: 10.3390/s21062132.
4. A Real-Time Grasping Detection Network Architecture for Various Grasping Scenarios.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8215-8226. doi: 10.1109/TNNLS.2024.3419180. Epub 2025 May 2.
5. Attention Based Visual Analysis for Fast Grasp Planning With a Multi-Fingered Robotic Hand.
Front Neurorobot. 2019 Jul 31;13:60. doi: 10.3389/fnbot.2019.00060. eCollection 2019.
6. Category-Level 6-D Object Pose Estimation With Shape Deformation for Robotic Grasp Detection.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1857-1871. doi: 10.1109/TNNLS.2023.3330011. Epub 2025 Jan 7.
7. GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping.
Sensors (Basel). 2022 Aug 18;22(16):6208. doi: 10.3390/s22166208.
8. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.
9. A Novel Robotic Pushing and Grasping Method Based on Vision Transformer and Convolution.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10832-10845. doi: 10.1109/TNNLS.2023.3244186. Epub 2024 Aug 5.
10. Secure Grasping Detection of Objects in Stacked Scenes Based on Single-Frame RGB Images.
Sensors (Basel). 2023 Sep 24;23(19):8054. doi: 10.3390/s23198054.

Cited By

1. Efficient push-grasping for multiple target objects in clutter environments.
Front Neurorobot. 2023 May 12;17:1188468. doi: 10.3389/fnbot.2023.1188468. eCollection 2023.

References

1. Enhanced framework for COVID-19 prediction with computed tomography scan images using dense convolutional neural network and novel loss function.
Comput Electr Eng. 2023 Jan;105:108479. doi: 10.1016/j.compeleceng.2022.108479. Epub 2022 Nov 14.
2. Invariance of object detection in untrained deep neural networks.
Front Comput Neurosci. 2022 Nov 3;16:1030707. doi: 10.3389/fncom.2022.1030707. eCollection 2022.
3. A computational classification method of breast cancer images using the VGGNet model.
Front Comput Neurosci. 2022 Nov 4;16:1001803. doi: 10.3389/fncom.2022.1001803. eCollection 2022.
4. An Infrared Sequence Image Generating Method for Target Detection and Tracking.
Front Comput Neurosci. 2022 Jul 15;16:930827. doi: 10.3389/fncom.2022.930827. eCollection 2022.
5. Focal Loss for Dense Object Detection.
IEEE Trans Pattern Anal Mach Intell. 2020 Feb;42(2):318-327. doi: 10.1109/TPAMI.2018.2858826. Epub 2018 Jul 23.