DSAC-T: Distributional Soft Actor-Critic With Three Refinements

Duan Jingliang, Wang Wenxuan, Xiao Liming, Gao Jiaxin, Li Shengbo Eben, Liu Chang, Zhang Ya-Qin, Cheng Bo, Li Keqiang
IEEE Trans Pattern Anal Mach Intell. 2025 May;47(5):3935-3946. doi: 10.1109/TPAMI.2025.3537087. Epub 2025 Apr 8.
Reinforcement learning (RL) has shown remarkable success in solving complex decision-making and control tasks. However, many model-free RL algorithms experience performance degradation due to inaccurate value estimation, particularly the overestimation of Q-values, which can lead to suboptimal policies. To address this issue, we previously proposed the Distributional Soft Actor-Critic (DSAC or DSACv1), an off-policy RL algorithm that enhances value estimation accuracy by learning a continuous Gaussian value distribution. Despite its effectiveness, DSACv1 faces challenges such as training instability and sensitivity to reward scaling, caused by high variance in critic gradients due to return randomness. In this paper, we introduce three key refinements to DSACv1 to overcome these limitations and further improve Q-value estimation accuracy: expected value substitution, twin value distribution learning, and variance-based critic gradient adjustment. The enhanced algorithm, termed DSAC with Three refinements (DSAC-T or DSACv2), is systematically evaluated across a diverse set of benchmark tasks. Without the need for task-specific hyperparameter tuning, DSAC-T consistently matches or outperforms leading model-free RL algorithms, including SAC, TD3, DDPG, TRPO, and PPO, in all tested environments. Additionally, DSAC-T ensures a stable learning process and maintains robust performance across varying reward scales. Its effectiveness is further demonstrated through real-world application in controlling a wheeled robot, highlighting its potential for deployment in practical robotic tasks.
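The three refinements named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: all function names, arguments, and the clipping constant `clip_b` are illustrative assumptions, and the paper's exact updates differ in detail. The sketch shows (a) a bootstrap target built from the minimum of two critics' expected Q-values (twin value distributions plus expected value substitution, which lowers target variance) and (b) a std update whose target is clipped to a few standard deviations around the current mean (variance-based critic gradient adjustment, which bounds gradient magnitude across reward scales).

```python
import numpy as np

# Sketch of the three DSAC-T refinements described in the abstract.
# Names and constants here are illustrative, not the authors' code.

def td_target(r, done, gamma, q1_next, q2_next):
    """Twin value distributions + expected value substitution:
    the bootstrap target uses the minimum of the two critics'
    *expected* Q-values (the Gaussian means) rather than a
    sampled return, reducing target randomness."""
    q_next = np.minimum(q1_next, q2_next)      # twin-critic minimum
    return r + gamma * (1.0 - done) * q_next   # expected-value target

def gaussian_critic_grads(q, std, target, clip_b=3.0, std_min=1e-3):
    """Variance-based critic gradient adjustment (sketch): the target
    used to update the return std is clipped to within clip_b standard
    deviations of the current mean, so the gradient magnitude stays
    bounded under different reward scales."""
    std = max(std, std_min)
    target_c = np.clip(target, q - clip_b * std, q + clip_b * std)
    # gradients of the Gaussian NLL  -log N(target | q, std^2)
    grad_q = -(target - q) / std**2                      # mean update
    grad_std = -((target_c - q) ** 2 - std**2) / std**3  # spread update
    return grad_q, grad_std
```

With `clip_b=3.0`, an outlier target 10 std away contributes the same spread gradient as one 3 std away, which is the scale-robustness property the abstract attributes to the variance-based adjustment.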