Toyota Central R&D Labs., Aichi 480-1192 Japan
Neural Comput. 2021 Jan;33(1):129-156. doi: 10.1162/neco_a_01333. Epub 2020 Oct 20.
This letter proposes a new idea for improving learning efficiency in reinforcement learning (RL) with the actor-critic method used as a muscle controller for posture stabilization of the human arm. Actor-critic RL (ACRL) is used in simulations to realize posture control in humans or robots through muscle tension control. However, it requires very high computational cost to acquire a good muscle control policy for desirable postures. To make ACRL more efficient, we focused on embodiment, which is thought to enable efficient control in the research fields of artificial intelligence and robotics. According to the neurophysiology of motion control established in experimental studies on animals and humans, the pedunculopontine tegmental nucleus (PPTn) induces muscle tone suppression, and the midbrain locomotor region (MLR) induces muscle tone promotion. The PPTn and MLR modulate the activation levels of mutually antagonizing muscles, such as flexors and extensors, in a process through which control signals are transmitted from the substantia nigra pars reticulata to the brainstem. We therefore hypothesized that the PPTn and MLR control muscle tone, that is, the maximum activation levels of mutually antagonizing muscles, using a different sigmoidal function for each muscle. We then introduced antagonism function models (AFMs) of the PPTn and MLR for individual muscles, incorporating this hypothesis into the process that determines the activation level of each muscle from the output of the actor in ACRL. ACRL with AFMs, representing the embodiment of muscle tone, successfully achieved posture stabilization in five joint motions of the right arm of an adult human male under gravity at predetermined target angles, and did so earlier in learning than methods without AFMs. The results of this study suggest that introducing the embodiment of muscle tone can enhance learning efficiency in posture stabilization of humans or humanoid robots.
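The abstract does not give the AFM equations, but the core idea, opposing sigmoidal functions that set the maximum activation (muscle tone) of an antagonistic muscle pair from the actor's output, can be sketched as follows. All function names, parameter names, and values here are illustrative assumptions, not taken from the paper:

```python
import math


def sigmoid(x, gain, threshold):
    # Smooth 0..1 modulation; a stand-in for the tone signals
    # attributed to the PPTn (suppression) and MLR (promotion).
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))


def afm_activation_caps(actor_output, gain=4.0, threshold=0.0):
    """Hypothetical antagonism function model (AFM).

    Maps a scalar actor output u in [-1, 1] to activation ceilings
    for a flexor/extensor pair via mirrored sigmoids: positive u
    raises the flexor's maximum tone while lowering the extensor's,
    and vice versa, so the pair is modulated antagonistically.
    """
    flexor_cap = sigmoid(actor_output, gain, threshold)
    extensor_cap = sigmoid(-actor_output, gain, threshold)
    return flexor_cap, extensor_cap
```

In this sketch, a neutral actor output yields equal caps for both muscles, while a strongly positive output favors the flexor, which is one plausible reading of how mutually antagonizing muscle tone could be shaped by per-muscle sigmoids.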