Wu Jingda, Zhou Yanxin, Yang Haohan, Huang Zhiyu, Lv Chen
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14745-14759. doi: 10.1109/TPAMI.2023.3314762. Epub 2023 Nov 3.
Reinforcement learning (RL) is a promising approach for unmanned ground vehicle (UGV) applications, but limited onboard computing resources make it challenging to deploy a well-performing RL policy built on sophisticated neural networks. Meanwhile, training RL for navigation tasks is difficult: it requires a carefully designed reward function and a large number of interactions, and the resulting policy can still fail in many corner cases. This reflects the limited intelligence of current RL methods and prompts us to rethink combining RL with human intelligence. In this paper, a human-guided RL framework is proposed to improve RL performance both during learning in the simulator and during deployment in the real world. The framework allows humans to intervene in RL's control process and provide demonstrations as needed, thereby improving RL's capabilities. An innovative human-guided RL algorithm is proposed that uses a series of mechanisms to improve the effectiveness of human guidance, including a human-guided learning objective, prioritized human experience replay, and human intervention-based reward shaping. Our RL method is trained in simulation and then transferred to the real world, and we develop a denoised representation for domain adaptation to mitigate the simulation-to-real gap. The method is validated through simulations and real-world experiments in which UGVs navigate diverse and dynamic environments using only tiny neural networks and image inputs. It outperforms existing learning- and model-based navigation approaches in goal-reaching and safety, and it is robust to changes in input features and ego kinetics. Furthermore, it allows small-scale human demonstrations to be used to improve the trained RL agent and to learn expected behaviors online.
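To make the abstract's mechanisms concrete, the following is a minimal, hypothetical sketch of two of the named ideas: a replay buffer that oversamples human-demonstration transitions, and reward shaping that penalizes states where a human had to intervene. All class and parameter names (`HumanPrioritizedReplay`, `human_ratio`, `penalty`) are illustrative assumptions, not the paper's actual implementation.

```python
import random

class HumanPrioritizedReplay:
    """Sketch of prioritized human experience replay: transitions recorded
    during human takeover are sampled more often than agent-generated ones.
    (Illustrative only; the paper's buffer may differ.)"""

    def __init__(self, human_ratio=0.3):
        self.agent_buf = []   # (s, a, r, s_next, done) from the RL policy
        self.human_buf = []   # transitions collected while a human drove
        self.human_ratio = human_ratio  # target fraction of human data per batch

    def add(self, transition, from_human=False):
        (self.human_buf if from_human else self.agent_buf).append(transition)

    def sample(self, batch_size):
        # Take up to human_ratio of the batch from human demonstrations,
        # capped by how many human transitions actually exist.
        n_human = min(int(batch_size * self.human_ratio), len(self.human_buf))
        batch = random.sample(self.human_buf, n_human)
        batch += random.sample(self.agent_buf, batch_size - n_human)
        return batch

def shaped_reward(env_reward, human_intervened, penalty=1.0):
    """Sketch of intervention-based reward shaping: a human takeover is
    treated as evidence the agent was misbehaving, so a penalty is
    subtracted from the environment reward at that step."""
    return env_reward - (penalty if human_intervened else 0.0)
```

In this sketch the penalty discourages the agent from revisiting states that triggered takeovers, while the oversampled human transitions pull the learned policy toward the demonstrated behavior.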