Error Bounds of Imitating Policies and Environments for Reinforcement Learning.

Publication Info

IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6968-6980. doi: 10.1109/TPAMI.2021.3096966. Epub 2022 Sep 14.

DOI: 10.1109/TPAMI.2021.3096966
PMID: 34260348
Abstract

In sequential decision-making, imitation learning (IL) trains a policy efficiently by mimicking expert demonstrations. Various imitation methods have been proposed and empirically evaluated, yet their theoretical understanding needs further study; in particular, the compounding error in long-horizon decisions is a major issue. In this paper, we first analyze the value gap between the expert policy and the policies produced by two imitation methods, behavioral cloning (BC) and generative adversarial imitation. The results show that generative adversarial imitation can reduce the compounding error compared with BC. Furthermore, we establish lower bounds for IL under two settings, suggesting the significance of environment interactions in IL. By treating the environment transition model as a dual agent, IL can also be used to learn the environment model. Therefore, based on the bounds for imitating policies, we further analyze the performance of imitating environments. The results show that environment models can be imitated more effectively by generative adversarial imitation than by BC. In particular, we obtain a policy evaluation error that is linear in the effective planning horizon with respect to the model bias, suggesting a novel application of adversarial imitation to model-based reinforcement learning (MBRL). We hope these results can inspire future advances in IL and MBRL.
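
For readers who want the shape of the compounding-error claim, the following is a minimal sketch in standard discounted-MDP notation, not the paper's formal theorems: R_max bounds the reward, 1/(1-γ) is the effective horizon, and ε_bc, ε_ga are illustrative placeholders for the BC and adversarial imitation errors (the paper's exact divergences and constants differ).

% Sketch of the bound shapes (assumed forms; the precise statements
% are in the paper's theorems).
% Behavioral cloning: a per-state error \varepsilon_{\mathrm{bc}},
% measured under the expert's state distribution, compounds over the
% horizon, so the value gap is quadratic in the effective horizon:
\[
  \bigl| V(\pi_E) - V(\pi_{\mathrm{BC}}) \bigr|
  \;\lesssim\; \frac{R_{\max}}{(1-\gamma)^{2}} \, \varepsilon_{\mathrm{bc}} .
\]
% Generative adversarial imitation matches state-action occupancy
% measures directly. Since
% V(\pi) = \tfrac{1}{1-\gamma}\,\mathbb{E}_{(s,a)\sim\rho_\pi}[r(s,a)],
% an occupancy-measure divergence of \varepsilon_{\mathrm{ga}} yields a
% value gap only linear in the effective horizon:
\[
  \bigl| V(\pi_E) - V(\pi_{\mathrm{GA}}) \bigr|
  \;\lesssim\; \frac{R_{\max}}{1-\gamma} \, \sqrt{\varepsilon_{\mathrm{ga}}} .
\]

The same horizon contrast underlies the MBRL claim: imitating the transition model adversarially gives a policy evaluation error that is linear, rather than quadratic, in the effective planning horizon with respect to the model bias.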

Similar Articles

1. Error Bounds of Imitating Policies and Environments for Reinforcement Learning.
   IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6968-6980. doi: 10.1109/TPAMI.2021.3096966. Epub 2022 Sep 14.
2. Distributional generative adversarial imitation learning with reproducing kernel generalization.
   Neural Netw. 2023 Aug;165:43-59. doi: 10.1016/j.neunet.2023.05.027. Epub 2023 May 25.
3. Addressing implicit bias in adversarial imitation learning with mutual information.
   Neural Netw. 2023 Oct;167:847-864. doi: 10.1016/j.neunet.2023.08.058. Epub 2023 Sep 4.
4. Domain Adaptation for Imitation Learning Using Generative Adversarial Network.
   Sensors (Basel). 2021 Jul 9;21(14):4718. doi: 10.3390/s21144718.
5. BAGAIL: Multi-modal imitation learning from imbalanced demonstrations.
   Neural Netw. 2024 Jun;174:106251. doi: 10.1016/j.neunet.2024.106251. Epub 2024 Mar 19.
6. Diverse Imitation Learning via Self-Organizing Generative Models.
   IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7145-7157. doi: 10.1109/TNNLS.2024.3401170. Epub 2025 Apr 4.
7. Quantum Imitation Learning.
   IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14190-14204. doi: 10.1109/TNNLS.2023.3275075. Epub 2024 Oct 7.
8. Prescribed Safety Performance Imitation Learning From a Single Expert Dataset.
   IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12236-12249. doi: 10.1109/TPAMI.2023.3287908. Epub 2023 Sep 5.
9. Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning.
   IEEE Trans Neural Netw Learn Syst. 2017 Aug;28(8):1814-1826. doi: 10.1109/TNNLS.2016.2543000. Epub 2016 May 4.
10. Restored Action Generative Adversarial Imitation Learning from observation for robot manipulator.
   ISA Trans. 2022 Oct;129(Pt B):684-690. doi: 10.1016/j.isatra.2022.02.041. Epub 2022 Mar 7.