
Discretizing Continuous Action Space With Unimodal Probability Distributions for On-Policy Reinforcement Learning.

Authors

Zhu Yuanyang, Wang Zhi, Zhu Yuanheng, Chen Chunlin, Zhao Dongbin

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):11285-11297. doi: 10.1109/TNNLS.2024.3446371.

Abstract

For on-policy reinforcement learning (RL), discretizing the action space for continuous control can easily express multiple modes and is straightforward to optimize. However, without considering the inherent ordering between the discrete atomic actions, the explosion in the number of discrete actions can have undesired properties and induce a higher variance for the policy gradient (PG) estimator. In this article, we introduce a straightforward architecture that addresses this issue by constraining the discrete policy to be unimodal using Poisson probability distributions. This unimodal architecture can better leverage the continuity of the underlying continuous action space through explicit unimodal probability distributions. We conduct extensive experiments to show that the discrete policy with the unimodal probability distribution provides significantly faster convergence and higher performance for on-policy RL algorithms in challenging control tasks, especially in highly complex tasks such as Humanoid. We provide theoretical analysis on the variance of the PG estimator, which suggests that our carefully designed unimodal discrete policy can retain a lower variance and yield a stable learning process.
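To make the core idea concrete, the following is a minimal sketch (not the authors' exact parameterization, which the abstract does not specify): a continuous action dimension is discretized into `num_atoms` bins, and the policy's probabilities over those atoms are obtained from a Poisson PMF with a learnable rate `lam`, truncated to the atom range and renormalized. Because the Poisson PMF is unimodal, probability mass concentrates on neighboring atoms rather than scattering across all bins. The function names and the mapping back to the continuous range are illustrative assumptions.

```python
import math

def unimodal_discrete_probs(lam, num_atoms):
    """Illustrative sketch: probabilities over `num_atoms` discrete actions
    from a Poisson(lam) PMF truncated to k in [0, num_atoms) and renormalized.
    log P(k) = k*log(lam) - lam - log(k!); the PMF rises until k ~ floor(lam)
    and falls afterwards, so the resulting discrete policy is unimodal."""
    logits = [k * math.log(lam) - lam - math.lgamma(k + 1)
              for k in range(num_atoms)]
    # Normalize the truncated log-PMF in a numerically stable way.
    m = max(logits)
    unnorm = [math.exp(x - m) for x in logits]
    z = sum(unnorm)
    return [p / z for p in unnorm]

def atom_to_action(k, num_atoms, low, high):
    """Map a discrete atom index back onto the continuous action range."""
    return low + (high - low) * k / (num_atoms - 1)
```

In a policy network, `lam` would be produced per action dimension by the network head, so the gradient only moves the location of the single mode; a plain softmax over independent logits has no such ordering constraint, which is the source of the extra PG variance the abstract refers to.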
