
Adaptive pessimism via target Q-value for offline reinforcement learning.

Affiliations

The Chinese University of Hong Kong, Shatin, NT, Hong Kong Special Administrative Region of China; Shanghai Artificial Intelligence Laboratory, No. 701, Yunjin Road, Shanghai, China.

Shanghai Artificial Intelligence Laboratory, No. 701, Yunjin Road, Shanghai, China; The University of Sydney, Sydney, 2006, NSW, Australia.

Publication Information

Neural Netw. 2024 Dec;180:106588. doi: 10.1016/j.neunet.2024.106588. Epub 2024 Aug 5.

Abstract

Offline reinforcement learning (RL) methods learn from fixed datasets without further environment interaction and therefore face estimation errors caused by out-of-distribution (OOD) actions. Although effective methods have been proposed to conservatively estimate the Q-values of those OOD actions and mitigate this problem, insufficient or excessive pessimism under a constant constraint often harms the policy learning process. Moreover, since the data distribution of each task varies with the environment and the behavior policy, it is desirable to learn, for each task, an adaptive weight that balances the constraint on the conservative Q-value estimate against the standard RL objective. To achieve this, we point out that a quantile of the Q-value is an effective reference for the Q-value distribution of the fixed dataset. Based on this observation, we design the Adaptive Pessimism via Target Q-value (APTQ) algorithm, which balances the pessimism constraint and the RL objective so that the expectation of the Q-value converges stably to a target Q-value chosen as a reasonable quantile of the dataset's Q-value distribution. Experiments show that our method remarkably improves the performance of the state-of-the-art method CQL, by 6.20% on D4RL-v0 and by 1.89% on D4RL-v2.
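The mechanism described in the abstract (a CQL-style conservative penalty whose coefficient is adapted so that the expected Q-value tracks a target drawn from a quantile of the dataset's Q-value distribution) can be illustrated with a short sketch. The following PyTorch code is a minimal, hedged reconstruction, not the authors' APTQ implementation: q_net, policy, the batch layout, quantile_target_q, and the dual-style weight update are all illustrative assumptions.

```python
# Minimal sketch (assumed names and update rule, not the paper's code):
# a conservative critic whose pessimism weight is adapted so that the
# expected Q-value on dataset actions tracks a quantile-based target.
import torch
import torch.nn.functional as F

def quantile_target_q(dataset_returns: torch.Tensor, quantile: float = 0.9) -> float:
    """Choose the target Q-value as a quantile of the dataset's
    (e.g. Monte-Carlo) return distribution."""
    return torch.quantile(dataset_returns, quantile).item()

def adaptive_pessimism_step(q_net, policy, batch, log_alpha, target_q,
                            q_optim, alpha_optim, gamma=0.99):
    s, a, r, s2, done = batch                  # transitions from the fixed dataset
    alpha = log_alpha.exp()                    # pessimism weight, kept positive

    # Standard TD objective on in-distribution transitions.
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * q_net(s2, policy(s2))
    q_data = q_net(s, a)
    td_loss = F.mse_loss(q_data, td_target)

    # CQL-style conservative term: push Q down on policy (possibly OOD)
    # actions and up on dataset actions.
    gap = (q_net(s, policy(s).detach()) - q_data).mean()

    # Critic update under the current pessimism weight.
    q_loss = td_loss + alpha.detach() * gap
    q_optim.zero_grad()
    q_loss.backward()
    q_optim.step()

    # Dual-style weight update: increase pessimism when the expected Q-value
    # on dataset actions overshoots the quantile-based target, relax otherwise.
    alpha_loss = -log_alpha.exp() * (q_data.detach().mean() - target_q)
    alpha_optim.zero_grad()
    alpha_loss.backward()
    alpha_optim.step()

    return q_loss.item(), alpha.item()
```

In a full agent one would also update the actor and typically use twin critics; the sketch only shows how a quantile-derived target can drive the pessimism weight up when the expected Q-value overshoots it and relax the constraint otherwise.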

