Wang Chenzheng, Huang Qiang, Chen Xuechao, Zhang Zeyu, Shi Jing
School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China.
Biomimetics (Basel). 2025 Jul 17;10(7):469. doi: 10.3390/biomimetics10070469.
Loco-manipulation tasks using humanoid robots have great practical value in various scenarios. While reinforcement learning (RL) has become a powerful tool for versatile and robust whole-body humanoid control, visuomotor control for loco-manipulation with RL remains a great challenge due to high dimensionality and long-horizon exploration issues. In this paper, we propose a loco-manipulation control framework for humanoid robots that applies model-free RL on top of model-based control in the robot's task space. It implements a visuomotor policy with depth-image input, and uses mid-way initialization and prioritized experience sampling to accelerate policy convergence. The proposed method is validated on the typical loco-manipulation tasks of load carrying and door opening, achieving an overall success rate of 83%; in these tasks, our framework automatically adjusts the robot's motion in reaction to changes in the environment.
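The abstract names prioritized experience sampling as one of the techniques used to speed up policy convergence. The paper's exact prioritization scheme is not given here, so the sketch below shows a generic proportional prioritized replay buffer (in the style of standard prioritized experience replay); the class name, hyperparameters (`alpha`, `beta`, `eps`), and priority rule (absolute TD error plus a small constant) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of proportional prioritized experience sampling.
# NOTE: names, constants, and the |TD-error| priority rule are assumptions;
# the paper's abstract does not specify its prioritization scheme.
class PrioritizedBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha                      # how strongly priorities skew sampling
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                            # next write index (ring buffer)

    def add(self, transition, priority=1.0):
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # sampling probability proportional to priority^alpha
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # importance-sampling weights correct the bias of non-uniform sampling
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # refresh priorities from new TD errors after a learning step
        self.priorities[idx] = np.abs(td_errors) + eps
```

In a typical RL loop, transitions are added as they are collected, minibatches are drawn with `sample`, and `update_priorities` is called with the freshly computed TD errors so that surprising transitions are revisited more often.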