Cheng Zhihao, Shen Li, Zhu Miaoxi, Guo Jiaxian, Fang Meng, Liu Liu, Du Bo, Tao Dacheng
IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12236-12249. doi: 10.1109/TPAMI.2023.3287908. Epub 2023 Sep 5.
Existing safe imitation learning (safe IL) methods mainly focus on learning safe policies that are similar to expert ones, but they may fail in applications requiring different safety constraints. In this paper, we propose the Lagrangian Generative Adversarial Imitation Learning (LGAIL) algorithm, which adaptively learns safe policies from a single expert dataset under diverse prescribed safety constraints. To achieve this, we augment GAIL with safety constraints and then relax the result into an unconstrained optimization problem via a Lagrange multiplier. The multiplier makes safety an explicit part of the objective and is dynamically adjusted to balance imitation and safety performance during training. We then solve LGAIL with a two-stage optimization framework: (1) a discriminator is optimized to measure the similarity between agent-generated data and expert data; (2) forward reinforcement learning is employed to improve the similarity while accounting for safety through the Lagrange multiplier. Furthermore, theoretical analyses of the convergence and safety of LGAIL demonstrate its ability to adaptively learn a safe policy under prescribed safety constraints. Finally, extensive experiments in OpenAI Safety Gym demonstrate the effectiveness of our approach.
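As a sketch of the relaxation step described in the abstract (the notation here is ours, not taken from the paper): write L_GAIL(pi) for the GAIL imitation objective, J_c(pi) for the expected cumulative cost of policy pi, and d for the prescribed safety budget. The constrained problem and its Lagrangian relaxation then read

    \min_{\pi} \; L_{\mathrm{GAIL}}(\pi) \quad \text{s.t.} \quad J_c(\pi) \le d
    \;\;\Longrightarrow\;\;
    \min_{\pi} \max_{\lambda \ge 0} \; L_{\mathrm{GAIL}}(\pi) + \lambda \left( J_c(\pi) - d \right),

where the multiplier lambda is raised when the constraint is violated and lowered otherwise, which is the dynamic balancing of imitation and safety the abstract describes.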
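To make the primal-dual dynamics concrete, here is a minimal runnable toy (all names and constants are illustrative stand-ins, not the authors' implementation): the discriminator-derived imitation reward and the environment cost are replaced by simple functions of a single scalar policy parameter, so the interaction between the policy update and the multiplier update can be observed directly.

    # Toy sketch of LGAIL's Lagrangian dynamics (assumed notation, not the
    # authors' code). A single scalar "effort" stands in for the policy:
    # higher effort improves the imitation reward but incurs more cost.

    def imitation_reward_grad(effort):
        # Stand-in for the gradient of the GAIL imitation reward r(e) = e.
        return 1.0

    def safety_cost(effort):
        # Stand-in for the environment's cumulative cost signal c(e) = e^2.
        return effort ** 2

    cost_limit = 0.25          # prescribed safety budget d
    lam, effort = 0.0, 1.0     # Lagrange multiplier and policy parameter
    policy_lr, lam_lr = 0.05, 0.1

    for _ in range(500):
        # Primal step: ascend the Lagrangian r(e) - lam * c(e) in effort.
        grad = imitation_reward_grad(effort) - lam * 2.0 * effort
        effort += policy_lr * grad
        # Dual step: raise lam when the cost exceeds the budget, lower it
        # otherwise (projected onto lam >= 0).
        lam = max(0.0, lam + lam_lr * (safety_cost(effort) - cost_limit))

    print(f"effort={effort:.3f}  cost={safety_cost(effort):.3f}  lam={lam:.3f}")
    # The cost settles near the budget 0.25 (effort near 0.5, lam near 1.0):
    # the multiplier automatically trades imitation performance for safety.

In the full algorithm the primal step is replaced by forward reinforcement learning on the discriminator-shaped reward minus lam times the cost, and the dual step uses the measured episodic cost, but the fixed-point behavior is the same: the multiplier grows until the prescribed safety constraint is met.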