Zhang Lihua, Liu Quan, Zhu Fei, Huang Zhigang
School of Computer Science and Technology, Soochow University, Suzhou, 215006, Jiangsu, China.
School of Computer Science and Technology, Soochow University, Suzhou, 215006, Jiangsu, China; Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou, 215006, Jiangsu, China.
Neural Netw. 2023 Oct;167:847-864. doi: 10.1016/j.neunet.2023.08.058. Epub 2023 Sep 4.
Adversarial imitation learning (AIL) is a powerful method for automated decision systems because it trains policies efficiently by mimicking expert demonstrations. However, an implicit bias is present in the reward functions of these algorithms, which leads to sample inefficiency. To solve this issue, we propose an algorithm, referred to as Mutual Information Generative Adversarial Imitation Learning (MI-GAIL), to correct this bias. In this study, we put forward two guidelines for designing an unbiased reward function. Based on these guidelines, we shape the reward function derived from the discriminator by adding auxiliary information from a potential-based reward function. The primary insight is that the potential-based reward function provides more accurate rewards for the actions identified by the two guidelines. We compare our algorithm with state-of-the-art imitation learning algorithms on a family of continuous control tasks. Experimental results show that MI-GAIL is able to address the issue of bias in AIL reward functions and further improve sample efficiency and training stability.
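To make the shaping idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of how a potential-based term can be added to a GAIL-style discriminator reward. The reward form `-log(1 - D)` is one common choice from the AIL literature, and the potential function `phi` and all names here are hypothetical; per Ng et al.'s classic result, a potential-based term of the form `gamma * phi(s') - phi(s)` leaves the optimal policy unchanged.

```python
import numpy as np

def discriminator_reward(d_out):
    # GAIL-style reward from discriminator output D(s, a) in (0, 1).
    # -log(1 - D) is one common (positively biased) choice.
    return -np.log(1.0 - d_out)

def shaped_reward(d_out, phi_s, phi_s_next, gamma=0.99):
    # Potential-based shaping: add gamma * Phi(s') - Phi(s) to the
    # discriminator reward; this does not change the optimal policy.
    return discriminator_reward(d_out) + gamma * phi_s_next - phi_s

# Toy check: with a constant potential Phi = c, shaping shifts the
# reward by (gamma - 1) * c.
r0 = discriminator_reward(0.5)
r1 = shaped_reward(0.5, phi_s=2.0, phi_s_next=2.0, gamma=0.99)
print(round(r1 - r0, 6))  # (0.99 - 1.0) * 2.0 = -0.02
```

The sketch only illustrates where the auxiliary potential-based term enters the reward; MI-GAIL's contribution lies in how the potential function is chosen so that the actions singled out by the two guidelines receive more accurate rewards.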