
Human-like Dexterous Grasping Through Reinforcement Learning and Multimodal Perception

Authors

Qi Wen, Fan Haoyu, Zheng Cankun, Su Hang, Alfayad Samer

Affiliations

School of Future Technology, South China University of Technology, Guangzhou 511442, China.

The IBISC Laboratory, UEVE, University of Paris-Saclay, 91000 Evry, France.

Publication

Biomimetics (Basel). 2025 Mar 18;10(3):186. doi: 10.3390/biomimetics10030186.

DOI: 10.3390/biomimetics10030186
PMID: 40136840
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11940771/
Abstract

Dexterous robotic grasping with multifingered hands remains a critical challenge in non-visual environments, where diverse object geometries and material properties demand adaptive force modulation and tactile-aware manipulation. To address this, we propose the Reinforcement Learning-Based Multimodal Perception (RLMP) framework, which integrates human-like grasping intuition through operator-worn gloves with tactile-guided reinforcement learning. The framework's key innovation lies in its Tactile-Driven DCNN architecture, a lightweight convolutional network achieving 98.5% object recognition accuracy using spatiotemporal pressure patterns, coupled with an RL policy refinement mechanism that dynamically correlates finger kinematics with real-time tactile feedback. Experimental results demonstrate reliable grasping performance across deformable and rigid objects while maintaining force precision critical for fragile targets. By bridging human teleoperation with autonomous tactile adaptation, RLMP eliminates dependency on visual input and predefined object models, establishing a new paradigm for robotic dexterity in occlusion-rich scenarios.
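
The tactile recognition stage the abstract describes (a lightweight convolutional network classifying objects from spatiotemporal pressure patterns) can be pictured with a minimal sketch. Everything concrete below is an illustrative assumption rather than a value from the paper: the 16x16 tactile grid, the 8-frame window, the layer widths, the 10-way output, and the use of PyTorch.

# Hypothetical sketch in the spirit of the paper's Tactile-Driven DCNN.
# Sensor grid (16x16), window length (8 frames), layer sizes, and class
# count are assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Classify an object from a short window of tactile pressure frames.

    Input shape (batch, T, H, W): the T frames are stacked as input
    channels so 2D convolutions see spatiotemporal pressure patterns.
    """
    def __init__(self, frames: int = 8, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(frames, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 16x16 -> 8x8
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),      # per-object logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = TactileCNN()
    window = torch.rand(1, 8, 16, 16)        # one window of 8 pressure maps
    print(model(window).shape)               # torch.Size([1, 10])

Stacking the frames as input channels is one simple way to let 2D convolutions pick up temporal as well as spatial pressure structure; the authors' actual architecture, and how its output feeds the RL policy refinement, may differ.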


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/d3c1e3360463/biomimetics-10-00186-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/ae45269792da/biomimetics-10-00186-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/15654cd9513a/biomimetics-10-00186-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/2f3b672629dc/biomimetics-10-00186-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/c9e17c8dfa9f/biomimetics-10-00186-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/a43571b8abc0/biomimetics-10-00186-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/b723ddfc4f76/biomimetics-10-00186-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/4a335a873839/biomimetics-10-00186-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/29a4822131b1/biomimetics-10-00186-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4daa/11940771/d15618cc7487/biomimetics-10-00186-g010.jpg

Similar Articles

1. Human-like Dexterous Grasping Through Reinforcement Learning and Multimodal Perception.
   Biomimetics (Basel). 2025 Mar 18;10(3):186. doi: 10.3390/biomimetics10030186.
2. A High-Repeatability Three-Dimensional Force Tactile Sensing System for Robotic Dexterous Grasping and Object Recognition.
   Micromachines (Basel). 2024 Dec 20;15(12):1513. doi: 10.3390/mi15121513.
3. An Accessible, Open-Source Dexterity Test: Evaluating the Grasping and Dexterous Manipulation Capabilities of Humans and Robots.
   Front Robot AI. 2022 Apr 25;9:808154. doi: 10.3389/frobt.2022.808154. eCollection 2022.
4. ADG-Net: A Sim2Real Multimodal Learning Framework for Adaptive Dexterous Grasping.
   IEEE Trans Cybern. 2025 Jan 3;PP. doi: 10.1109/TCYB.2024.3518975.
5. Multimodal tactile sensing fused with vision for dexterous robotic housekeeping.
   Nat Commun. 2024 Aug 11;15(1):6871. doi: 10.1038/s41467-024-51261-5.
6. Grasping Force Control of Multi-Fingered Robotic Hands through Tactile Sensing for Object Stabilization.
   Sensors (Basel). 2020 Feb 14;20(4):1050. doi: 10.3390/s20041050.
7. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.
   Sensors (Basel). 2016 May 5;16(5):640. doi: 10.3390/s16050640.
8. A Novel Hand Teleoperation Method with Force and Vibrotactile Feedback Based on Dynamic Compliant Primitives Controller.
   Biomimetics (Basel). 2025 Mar 21;10(4):194. doi: 10.3390/biomimetics10040194.
9. Hierarchical Tactile-Based Control Decomposition of Dexterous In-Hand Manipulation Tasks.
   Front Robot AI. 2020 Nov 19;7:521448. doi: 10.3389/frobt.2020.521448. eCollection 2020.
10. Object Manipulation with an Anthropomorphic Robotic Hand via Deep Reinforcement Learning with a Synergy Space of Natural Hand Poses.
   Sensors (Basel). 2021 Aug 5;21(16):5301. doi: 10.3390/s21165301.

Cited By

1. Design of a Hierarchical Control Architecture for Fully-Driven Multi-Fingered Dexterous Hand.
   Biomimetics (Basel). 2025 Jun 30;10(7):422. doi: 10.3390/biomimetics10070422.
2. Bio-Signal-Guided Robot Adaptive Stiffness Learning via Human-Teleoperated Demonstrations.
   Biomimetics (Basel). 2025 Jun 13;10(6):399. doi: 10.3390/biomimetics10060399.

References

1. Low-Noise Magnetic Coil System for Recording 3-Dimensional Eye Movements.
   IEEE Trans Instrum Meas. 2021;70:1-9. doi: 10.1109/tim.2020.3020682. Epub 2020 Aug 31.
2. Hierarchical Tactile-Based Control Decomposition of Dexterous In-Hand Manipulation Tasks.
   Front Robot AI. 2020 Nov 19;7:521448. doi: 10.3389/frobt.2020.521448. eCollection 2020.
3. Grip Stabilization through Independent Finger Tactile Feedback Control.
   Sensors (Basel). 2020 Mar 21;20(6):1748. doi: 10.3390/s20061748.
4. Grasping Force Control of Multi-Fingered Robotic Hands through Tactile Sensing for Object Stabilization.
   Sensors (Basel). 2020 Feb 14;20(4):1050. doi: 10.3390/s20041050.
5. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
   IEEE Trans Cybern. 2016 Mar;46(3):655-67. doi: 10.1109/TCYB.2015.2412554. Epub 2015 Mar 24.
6. Human hand modelling: kinematics, dynamics, applications.
   Biol Cybern. 2012 Dec;106(11-12):741-55. doi: 10.1007/s00422-012-0532-4. Epub 2012 Nov 7.