

Mild Policy Evaluation for Offline Actor-Critic.

Authors

Huang Longyang, Dong Botao, Lu Jinhui, Zhang Weidong

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17950-17964. doi: 10.1109/TNNLS.2023.3309906. Epub 2024 Dec 2.

DOI: 10.1109/TNNLS.2023.3309906
PMID: 37676802
Abstract

In offline actor-critic (AC) algorithms, the distributional shift between the training data and the target policy causes optimistic value estimates for out-of-distribution (OOD) actions, which skews the learned policy toward OOD actions with falsely high values. Existing value-regularized offline AC algorithms address this issue by learning a conservative value function, at the cost of a performance drop. In this article, we propose mild policy evaluation (MPE), which constrains the difference between the values of actions supported by the target policy and the values of actions contained in the offline dataset. We analyze the convergence of MPE, the gap between the learned value function and the true one, and the suboptimality of offline AC with MPE. A mild offline AC (MOAC) algorithm is developed by integrating MPE into off-policy AC. Unlike existing offline AC algorithms, the value function gap of MOAC remains bounded in the presence of sampling errors; in the absence of sampling errors, the true state value function can be recovered. Experimental results on the D4RL benchmark dataset demonstrate the effectiveness of MPE and the performance advantage of MOAC over state-of-the-art offline reinforcement learning (RL) algorithms.
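The abstract describes the MPE mechanism only at a high level: keep the critic's value for actions drawn from the target policy from outrunning its value for the actions actually stored in the offline dataset. Below is a minimal PyTorch sketch of that idea as a critic-loss regularizer; the hinge-style penalty, the margin `eps`, the weight `lam`, and all network shapes are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of an MPE-style critic regularizer, based only on the
# abstract: penalize the critic when values of actions drawn from the
# target policy exceed values of actions stored in the offline dataset.
# The hinge form, margin `eps`, and weight `lam` are assumptions.
import torch
import torch.nn as nn


class QCritic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # Returns Q(s, a) with shape (batch,).
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)


def mpe_critic_loss(critic, target_critic, actor, batch,
                    gamma: float = 0.99, lam: float = 1.0, eps: float = 0.0):
    """Standard TD loss plus a mild value constraint.

    `batch` is assumed to hold offline tensors s, a, r, s2, done,
    with r and done of shape (batch,).
    """
    s, a, r, s2, done = batch["s"], batch["a"], batch["r"], batch["s2"], batch["done"]

    # Ordinary Bellman backup using next actions from the current policy.
    with torch.no_grad():
        a2 = actor(s2)
        target = r + gamma * (1.0 - done) * target_critic(s2, a2)
    td_loss = nn.functional.mse_loss(critic(s, a), target)

    # Mild constraint: values of policy-supported actions should not
    # exceed values of in-dataset actions by more than a margin eps.
    q_pi = critic(s, actor(s).detach())
    q_data = critic(s, a)
    mild_penalty = torch.relu(q_pi - q_data - eps).mean()

    return td_loss + lam * mild_penalty
```

One plausible reading of why this evaluation is "mild": unlike fully conservative penalties that push down every OOD value, this term is inactive whenever policy actions are valued no higher than dataset actions, so in-distribution value estimates are left untouched.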

Similar Articles

1. Offline Reinforcement Learning With Behavior Value Regularization.
IEEE Trans Cybern. 2024 Jun;54(6):3692-3704. doi: 10.1109/TCYB.2024.3385910. Epub 2024 May 30.

2. Efficient Offline Reinforcement Learning With Relaxed Conservatism.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5260-5272. doi: 10.1109/TPAMI.2024.3364844. Epub 2024 Jul 2.

3. De-Pessimism Offline Reinforcement Learning via Value Compensation.
IEEE Trans Neural Netw Learn Syst. 2024 Aug 23;PP. doi: 10.1109/TNNLS.2024.3443082.

4. Adaptive pessimism via target Q-value for offline reinforcement learning.
Neural Netw. 2024 Dec;180:106588. doi: 10.1016/j.neunet.2024.106588. Epub 2024 Aug 5.

5. Actor-Critic Alignment for Offline-to-Online Reinforcement Learning.
Proc Mach Learn Res. 2023 Jul;202:40452-40474.

6. False Correlation Reduction for Offline Reinforcement Learning.
IEEE Trans Pattern Anal Mach Intell. 2024 Feb;46(2):1199-1211. doi: 10.1109/TPAMI.2023.3328397. Epub 2024 Jan 8.

7. Relative importance sampling for off-policy actor-critic in deep reinforcement learning.
Sci Rep. 2025 Apr 24;15(1):14349. doi: 10.1038/s41598-025-96201-5.

8. Monotonic Quantile Network for Worst-Case Offline Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):8954-8968. doi: 10.1109/TNNLS.2022.3217189. Epub 2024 Jul 8.

9. Improving Offline Reinforcement Learning With In-Sample Advantage Regularization for Robot Manipulation.
IEEE Trans Neural Netw Learn Syst. 2024 Sep 20;PP. doi: 10.1109/TNNLS.2024.3443102.