Adaptive pessimism via target Q-value for offline reinforcement learning.

Affiliations

The Chinese University of Hong Kong, Shatin, NT, Hong Kong Special Administrative Region of China; Shanghai Artificial Intelligence Laboratory, No. 701, Yunjin Road, Shanghai, China.

Shanghai Artificial Intelligence Laboratory, No. 701, Yunjin Road, Shanghai, China; The University of Sydney, Sydney, 2006, NSW, Australia.

Publication Information

Neural Netw. 2024 Dec;180:106588. doi: 10.1016/j.neunet.2024.106588. Epub 2024 Aug 5.

DOI: 10.1016/j.neunet.2024.106588
PMID: 39180907
Abstract

Offline reinforcement learning (RL) methods learn from fixed datasets without further environment interaction, and therefore face errors caused by out-of-distribution (OOD) actions. Although effective methods have been proposed that conservatively estimate the Q-values of OOD actions to mitigate this problem, insufficient or excessive pessimism under a constant constraint often harms policy learning. Moreover, since the data distribution of each task varies across environments and behavior policies, it is desirable to learn, per task, an adaptive weight that balances the conservative Q-value constraint against the standard RL objective. To this end, we point out that a quantile of the Q-value distribution of the fixed dataset is an effective reference metric. Based on this observation, we design the Adaptive Pessimism via Target Q-value (APTQ) algorithm, which balances the pessimism constraint and the RL objective so that the expected Q-value converges stably to a target Q-value taken from a reasonable quantile of the dataset's Q-value distribution. Experiments show that our method remarkably improves the performance of the state-of-the-art method CQL, by 6.20% on D4RL-v0 and 1.89% on D4RL-v2.
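To make the mechanism concrete, below is a minimal Python sketch of the idea as the abstract states it: choose a target Q-value from a quantile of the dataset's Q-value distribution, then adapt the pessimism weight so the expected Q-value tracks that target. The function names, the dual-ascent-style update rule, and the CQL-style loss composition in the trailing comment are illustrative assumptions, not the authors' implementation.

import numpy as np

def target_q_from_quantile(dataset_q: np.ndarray, quantile: float = 0.9) -> float:
    # Pick the target Q-value as a quantile of the dataset's Q-value
    # distribution -- the reference point the abstract describes.
    return float(np.quantile(dataset_q, quantile))

def adapt_pessimism_weight(alpha: float, mean_q: float, target_q: float,
                           lr: float = 1e-3) -> float:
    # Assumed dual-ascent-style update: when the learned Q-values overshoot
    # the target (over-optimism), the weight on the conservative penalty
    # grows; when they undershoot (over-pessimism), it shrinks. This nudges
    # the expected Q-value toward the target.
    return max(0.0, alpha + lr * (mean_q - target_q))

# Hypothetical use inside a CQL-like critic update:
#   total_loss = bellman_loss + alpha * conservative_penalty
#   alpha = adapt_pessimism_weight(alpha, float(current_q.mean()), target_q)

The quantile choice is what would make the pessimism task-adaptive: a dataset dominated by near-expert trajectories admits a higher target Q-value than one collected by a random behavior policy, so the same update rule yields a task-specific degree of conservatism.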

Similar Articles

1. Adaptive pessimism via target Q-value for offline reinforcement learning.
   Neural Netw. 2024 Dec;180:106588. doi: 10.1016/j.neunet.2024.106588. Epub 2024 Aug 5.
2. De-Pessimism Offline Reinforcement Learning via Value Compensation.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug 23;PP. doi: 10.1109/TNNLS.2024.3443082.
3. Mild Policy Evaluation for Offline Actor-Critic.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17950-17964. doi: 10.1109/TNNLS.2023.3309906. Epub 2024 Dec 2.
4. Offline Reinforcement Learning With Behavior Value Regularization.
   IEEE Trans Cybern. 2024 Jun;54(6):3692-3704. doi: 10.1109/TCYB.2024.3385910. Epub 2024 May 30.
5. Improving Offline Reinforcement Learning With In-Sample Advantage Regularization for Robot Manipulation.
   IEEE Trans Neural Netw Learn Syst. 2024 Sep 20;PP. doi: 10.1109/TNNLS.2024.3443102.
6. Monotonic Quantile Network for Worst-Case Offline Reinforcement Learning.
   IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):8954-8968. doi: 10.1109/TNNLS.2022.3217189. Epub 2024 Jul 8.
7. Efficient Offline Reinforcement Learning With Relaxed Conservatism.
   IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5260-5272. doi: 10.1109/TPAMI.2024.3364844. Epub 2024 Jul 2.
8. Modeling Bellman-error with logistic distribution with applications in reinforcement learning.
   Neural Netw. 2024 Sep;177:106387. doi: 10.1016/j.neunet.2024.106387. Epub 2024 May 15.
9. GFANC-RL: Reinforcement Learning-based Generative Fixed-filter Active Noise Control.
   Neural Netw. 2024 Dec;180:106687. doi: 10.1016/j.neunet.2024.106687. Epub 2024 Sep 5.
10. False Correlation Reduction for Offline Reinforcement Learning.
    IEEE Trans Pattern Anal Mach Intell. 2024 Feb;46(2):1199-1211. doi: 10.1109/TPAMI.2023.3328397. Epub 2024 Jan 8.