Policy Design for an Ankle-Foot Orthosis Using Simulated Physical Human-Robot Interaction via Deep Reinforcement Learning.

Author Information

Han Jong In, Lee Jeong-Hoon, Choi Ho Seon, Kim Jung-Hoon, Choi Jongeun

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2186-2197. doi: 10.1109/TNSRE.2022.3196468. Epub 2022 Aug 11.

Abstract

This paper presents a novel approach for designing a robotic orthosis controller considering physical human-robot interaction (pHRI). Computer simulation for this human-robot system can be advantageous in terms of time and cost due to the laborious nature of designing a robot controller that effectively assists humans with the appropriate magnitude and phase. Therefore, we propose a two-stage policy training framework based on deep reinforcement learning (deep RL) to design a robot controller using human-robot dynamic simulation. In Stage 1, the optimal policy for generating human gaits is obtained from deep RL-based imitation learning on a healthy subject model using the musculoskeletal simulation in OpenSim-RL. In Stage 2, human models in which the right soleus muscle is weakened to a certain severity are created by modifying the human model obtained from Stage 1. A robotic orthosis is then attached to the right ankle of these models. The orthosis policy that assists walking with optimal torque is then trained on these models. Here, the elastic foundation model is used to predict the pHRI in the coupling part between the human and robotic orthosis. Comparative analysis of kinematic and kinetic simulation results with the experimental data shows that the derived human musculoskeletal model imitates human walking. It also shows that the robotic orthosis policy obtained from two-stage policy training can assist the weakened soleus muscle. The proposed approach was validated by applying the learned policy to the ankle orthosis, conducting a gait experiment, and comparing the results with the simulation.
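The two-stage training idea described above can be illustrated with a minimal sketch. This toy example uses a 1-DoF trajectory-tracking problem and scalar random search in place of deep RL and the OpenSim-RL musculoskeletal simulation; all dynamics, names, and parameters here are illustrative assumptions, not the paper's actual models. Stage 1 fits a "human" policy by imitating a reference gait; Stage 2 weakens that policy (a stand-in for the weakened soleus muscle) and trains an additive "orthosis" assist on top of the frozen human policy.

```python
import math
import random

# Reference "gait" trajectory that the healthy model should imitate.
REF = [math.sin(2 * math.pi * t / 50) for t in range(50)]

def rollout(gain, assist=0.0, weakness=1.0):
    """First-order tracking dynamics: torque drives the angle toward
    the target; `weakness` scales the human policy, `assist` is the
    orthosis contribution."""
    angle, traj = 0.0, []
    for target in REF:
        torque = (weakness * gain + assist) * (target - angle)
        angle += torque  # unit-mass, unit-timestep integration
        traj.append(angle)
    return traj

def imitation_cost(traj):
    """Squared tracking error against the reference gait."""
    return sum((a - r) ** 2 for a, r in zip(traj, REF))

def train(objective, lo, hi, iters=200, seed=0):
    """Random search over one scalar parameter (stands in for deep RL)."""
    rng = random.Random(seed)
    best_x, best_c = lo, objective(lo)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        c = objective(x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x

# Stage 1: fit the healthy human policy by imitation learning.
human_gain = train(lambda g: imitation_cost(rollout(g)), 0.0, 1.0)

# Stage 2: weaken the "muscle" to 30% strength, freeze the human
# policy, and train the orthosis assist to restore the gait.
WEAK = 0.3
assist = train(lambda a: imitation_cost(rollout(human_gain, a, WEAK)),
               0.0, 1.0)

weak_cost = imitation_cost(rollout(human_gain, 0.0, WEAK))
assisted_cost = imitation_cost(rollout(human_gain, assist, WEAK))
```

In this sketch the trained assist compensates for the lost effective gain, so the assisted tracking cost is lower than the unassisted weakened cost, mirroring how the orthosis policy in the paper assists the weakened soleus muscle.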

