A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints.

Authors

Peng Gang, Liao Jinhu, Guan Shangbin, Yang Jin, Li Xinde

Affiliations

School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China.

Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Huazhong University of Science and Technology, Wuhan, 430074, China.

Publication

Sci Rep. 2022 Mar 10;12(1):3927. doi: 10.1038/s41598-022-07900-2.

DOI:10.1038/s41598-022-07900-2
PMID:35273281
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8913751/
Abstract

In intelligent manufacturing, robotic grasping and sorting are important tasks. However, traditional manipulator grasping methods based on a single-view 2D camera have low efficiency and accuracy in stacked and occluded scenes, for two reasons: a single viewpoint misses part of the scene information, and grasp-only policies cannot rearrange a scene that is too stacked and occluded to grasp. To address these issues, this paper proposes a pushing-grasping collaborative method based on a deep Q-network in dual viewpoints. The method adopts an improved deep Q-network algorithm and uses an RGB-D camera to acquire RGB images and point clouds of the objects from two viewpoints, which resolves the missing-information problem. Moreover, it combines pushing and grasping actions within the deep Q-network, giving the agent the ability to explore actively: the trained manipulator can reduce stacking and occlusion in the scene, and with that help it performs well in more complicated grasping scenes. In addition, the reward function of the deep Q-network is improved with a proposed piecewise reward function that speeds up convergence. Different models and methods were trained and compared in the V-REP simulation environment; the results show that the proposed method converges quickly and that its grasp success rate in unstructured scenes reaches 83.5%. The method also generalizes well, performing strongly when novel objects that the manipulator has never grasped before appear in the scene.
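The abstract's key algorithmic ideas are shaping the reward piecewise and letting the agent choose between pushing and grasping. The sketch below is an illustrative reconstruction of what such a piecewise reward might look like, not the authors' code: the action names, the 0.5 push credit, and the 10% scene-change threshold are all assumptions for illustration.

```python
# Hypothetical sketch of a piecewise reward for a push-grasp DQN agent.
# The idea: grasps are rewarded only on success, while pushes earn a
# smaller graded reward only when they noticeably loosen the clutter,
# which gives denser feedback than a sparse grasp-only reward and can
# speed up DQN convergence. Thresholds and values are assumptions.

def piecewise_reward(action: str, grasp_success: bool, scene_change: float) -> float:
    """Return a shaped reward for one push or grasp attempt.

    action: "grasp" or "push"
    grasp_success: whether a grasp attempt lifted an object
    scene_change: fraction of the scene heightmap changed by a push (0..1)
    """
    if action == "grasp":
        # Full reward only for a successful grasp.
        return 1.0 if grasp_success else 0.0
    # Piecewise push branch: credit only pushes that meaningfully
    # rearrange the stack; tiny disturbances earn nothing.
    if scene_change > 0.1:
        return 0.5
    return 0.0
```

In a training loop, this reward would replace the sparse grasp-success signal when computing the Q-learning target for each transition.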


Figures 1-6:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/8b8d4dba27c5/41598_2022_7900_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/b9acdb14609c/41598_2022_7900_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/9c4680dd5f24/41598_2022_7900_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/8a85f859b419/41598_2022_7900_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/5ac4db1967c8/41598_2022_7900_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51b6/8913751/48b1409d7a5b/41598_2022_7900_Fig6_HTML.jpg

Similar Articles

1. A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints.
Sci Rep. 2022 Mar 10;12(1):3927. doi: 10.1038/s41598-022-07900-2.
2. A Novel Robotic Pushing and Grasping Method Based on Vision Transformer and Convolution.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10832-10845. doi: 10.1109/TNNLS.2023.3244186. Epub 2024 Aug 5.
3. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
Front Neurorobot. 2021 Aug 13;15:719731. doi: 10.3389/fnbot.2021.719731. eCollection 2021.
4. Efficient push-grasping for multiple target objects in clutter environments.
Front Neurorobot. 2023 May 12;17:1188468. doi: 10.3389/fnbot.2023.1188468. eCollection 2023.
5. Secure Grasping Detection of Objects in Stacked Scenes Based on Single-Frame RGB Images.
Sensors (Basel). 2023 Sep 24;23(19):8054. doi: 10.3390/s23198054.
6. Research on Intelligent Robot Point Cloud Grasping in Internet of Things.
Micromachines (Basel). 2022 Nov 17;13(11):1999. doi: 10.3390/mi13111999.
7. Robot grasping method optimization using improved deep deterministic policy gradient algorithm of deep reinforcement learning.
Rev Sci Instrum. 2021 Feb 1;92(2):025114. doi: 10.1063/5.0034101.
8. Grasping detection of dual manipulators based on Markov decision process with neural network.
Neural Netw. 2024 Jan;169:778-792. doi: 10.1016/j.neunet.2023.09.016. Epub 2023 Sep 14.
9. Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
10. An Efficient Robotic Pushing and Grasping Method in Cluttered Scene.
IEEE Trans Cybern. 2024 Sep;54(9):4889-4902. doi: 10.1109/TCYB.2024.3381639. Epub 2024 Aug 26.

Cited By

1. An intelligent emulsion explosive grasping and filling system based on YOLO-SimAM-GRCNN.
Sci Rep. 2024 Nov 18;14(1):28425. doi: 10.1038/s41598-024-77034-0.
2. Towards Multi-Objective Object Push-Grasp Policy Based on Maximum Entropy Deep Reinforcement Learning under Sparse Rewards.
Entropy (Basel). 2024 May 12;26(5):416. doi: 10.3390/e26050416.
3. Review of Learning-Based Robotic Manipulation in Cluttered Environments.
Sensors (Basel). 2022 Oct 18;22(20):7938. doi: 10.3390/s22207938.

References

1. Fully Convolutional Networks for Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.
2. Human-level control through deep reinforcement learning.
Nature. 2015 Feb 26;518(7540):529-33. doi: 10.1038/nature14236.