Bai Chenjia, Xiao Ting, Zhu Zhoufan, Wang Lingxiao, Zhou Fan, Garg Animesh, He Bin, Liu Peng, Wang Zhaoran
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):8954-8968. doi: 10.1109/TNNLS.2022.3217189. Epub 2024 Jul 8.
A key challenge in offline reinforcement learning (RL) is ensuring that the learned policy is safe, especially in safety-critical domains. In this article, we focus on learning a distributional value function in offline RL and optimizing a worst-case criterion of the returns. However, optimizing a distributional value function in offline RL is difficult: quantile crossing is a serious issue, and the distribution-shift problem must also be addressed. To this end, we propose a monotonic quantile network (MQN) with conservative quantile regression (CQR) for risk-averse policy learning. First, we propose an MQN to learn the distribution over returns with a non-crossing guarantee on the quantiles. Then, we perform CQR by penalizing the quantile estimates of out-of-distribution (OOD) actions to address the distribution shift in offline RL. Finally, we learn a worst-case policy by optimizing the conditional value-at-risk (CVaR) of the distributional value function. Furthermore, we provide a theoretical analysis of the fixed-point convergence of our method. We conduct experiments in both risk-neutral and risk-sensitive offline settings, and the results show that our method obtains safe and conservative behaviors in robotic locomotion tasks.
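To make the two mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the non-crossing guarantee is realized here by a softplus-cumsum head (one common construction; the paper's exact MQN architecture may differ), CVaR is approximated as the mean of the lowest alpha-fraction of quantiles, and the conservative penalty follows the push-down-on-OOD, push-up-on-data pattern the abstract describes. All names, layer sizes, and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicQuantileNet(nn.Module):
    """Predicts n_quantiles return quantiles for a (state, action) pair.

    Non-crossing is enforced structurally (an assumed construction, not
    necessarily the paper's): the head outputs the lowest quantile plus
    non-negative increments, and a cumulative sum yields a monotonically
    non-decreasing quantile vector.
    """
    def __init__(self, state_dim: int, action_dim: int, n_quantiles: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.base = nn.Linear(256, 1)                  # lowest quantile
        self.deltas = nn.Linear(256, n_quantiles - 1)  # raw increments

    def forward(self, state, action):
        h = self.body(torch.cat([state, action], dim=-1))
        base = self.base(h)
        incr = F.softplus(self.deltas(h))  # >= 0, so quantiles cannot cross
        return torch.cat([base, base + incr.cumsum(dim=-1)], dim=-1)

def cvar(quantiles: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """CVaR_alpha approximated as the mean of the lowest alpha-fraction of
    the quantile estimates (already sorted by construction)."""
    k = max(1, int(alpha * quantiles.shape[-1]))
    return quantiles[..., :k].mean(dim=-1)

def conservative_penalty(net, state, data_action, policy_action):
    """CQR-style regularizer (sketch): minimizing this pushes quantile
    estimates down on (possibly OOD) policy actions and up on dataset
    actions."""
    return net(state, policy_action).mean() - net(state, data_action).mean()

In this sketch, a risk-averse actor would be trained to maximize cvar(net(state, actor(state))) while the critic loss adds conservative_penalty to an ordinary quantile-regression term; the weighting between the two is a design choice the abstract does not specify.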