Qi Wen, Fan Haoyu, Zheng Cankun, Su Hang, Alfayad Samer
School of Future Technology, South China University of Technology, Guangzhou 511442, China.
The IBISC Laboratory, UEVE, University of Paris-Saclay, 91000 Evry, France.
Biomimetics (Basel). 2025 Mar 18;10(3):186. doi: 10.3390/biomimetics10030186.
Dexterous robotic grasping with multifingered hands remains a critical challenge in non-visual environments, where diverse object geometries and material properties demand adaptive force modulation and tactile-aware manipulation. To address this, we propose the Reinforcement Learning-Based Multimodal Perception (RLMP) framework, which integrates human-like grasping intuition through operator-worn gloves with tactile-guided reinforcement learning. The framework's key innovation lies in its Tactile-Driven DCNN architecture, a lightweight convolutional network that achieves 98.5% object recognition accuracy from spatiotemporal pressure patterns, coupled with an RL policy refinement mechanism that dynamically correlates finger kinematics with real-time tactile feedback. Experimental results demonstrate reliable grasping performance across deformable and rigid objects while maintaining the force precision critical for fragile targets. By bridging human teleoperation with autonomous tactile adaptation, RLMP eliminates dependency on visual input and predefined object models, establishing a new paradigm for robotic dexterity in occlusion-rich scenarios.
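To make the "lightweight convolutional network over spatiotemporal pressure patterns" idea concrete, the following is a minimal sketch in PyTorch, not the authors' released code: the tactile-array resolution, number of pressure frames, layer widths, and class count are illustrative assumptions only.

```python
# Minimal sketch of a tactile-driven DCNN classifier over spatiotemporal
# pressure patterns. All shapes and sizes below are assumptions for
# illustration, not values reported in the paper.
import torch
import torch.nn as nn

class TactileDCNN(nn.Module):
    def __init__(self, num_classes: int = 10, time_steps: int = 16):
        super().__init__()
        # Treat the T pressure frames as input channels of a 2-D CNN,
        # so each spatial filter also mixes information across time.
        self.features = nn.Sequential(
            nn.Conv2d(time_steps, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the head tiny
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, H, W) pressure frames from the tactile array
        z = self.features(x).flatten(1)
        return self.classifier(z)

if __name__ == "__main__":
    model = TactileDCNN(num_classes=10, time_steps=16)
    frames = torch.rand(4, 16, 8, 8)   # hypothetical 8x8 taxel grid
    logits = model(frames)
    print(logits.shape)                # torch.Size([4, 10])
```

In an RLMP-style pipeline, the class logits (or the pooled tactile embedding) would feed the RL policy alongside finger joint states, letting the policy refine grip force per object type; that wiring is likewise an assumption about the described design, not a specification from the abstract.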