Xia Bo, Sun Haoyuan, Yuan Bo, Li Zhiheng, Liang Bin, Wang Xueqian
Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China.
Research Institute of Tsinghua University in Shenzhen, Shenzhen, 518057, China.
Neural Netw. 2025 Jan;181:106769. doi: 10.1016/j.neunet.2024.106769. Epub 2024 Oct 1.
In reinforcement learning, the Markov Decision Process (MDP) framework typically operates under a blocking paradigm: the environment is assumed to be static while the agent makes its decision, and the agent is assumed to remain stationary while the environment executes the chosen action. This blocking model is often inadequate for real-time tasks, as it cannot accommodate concurrent changes in the agent's decision-making process and the environment's dynamics. Contemporary solutions, such as linear interpolation or state-space augmentation, attempt to address the asynchrony between delayed states and actions in real-time environments; however, these methods frequently require precise delay measurements and may fail to fully capture the complexities of delay dynamics. To address these challenges, we introduce a minimal information set that encapsulates the information generated concurrently during agent-environment interaction and serves as the foundation of our real-time decision-making framework. The traditional blocking-mode MDP is then reformulated as a Minimal Information State Markov Decision Process (MISMDP), which aligns more closely with the demands of real-time environments. Within this MISMDP framework, we propose MRAC (Minimal information set for Real-time tasks using Actor-Critic), a general approach for addressing delay issues in real-time tasks, supported by a rigorous theoretical analysis of Q-function convergence. Extensive experiments in both discrete and continuous action-space environments demonstrate that MRAC outperforms state-of-the-art algorithms, delivering superior performance and generalization in handling delays within real-time tasks.
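For readers unfamiliar with the delay problem the abstract refers to, the sketch below illustrates the conventional state-augmentation baseline that MRAC is contrasted with: under a fixed observation delay, the agent's "information state" is the most recently delivered observation together with the actions it has issued since that observation. The wrapper, the `MinimalInfoState` container, the `FixedDelayWrapper` name, and the toy environment are all hypothetical illustrations under an assumed fixed, known delay; they are not the paper's MRAC implementation, which is designed precisely for settings where such precise delay knowledge is unavailable.

```python
import random
from collections import deque
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class MinimalInfoState:
    """Illustrative 'information state': the last state the agent has actually
    observed plus the actions it has issued since that observation."""
    last_obs: Any
    pending_actions: List[Any] = field(default_factory=list)


class ToyEnv:
    """Toy 1-D random-walk environment, used only to exercise the wrapper."""

    def reset(self):
        self.pos = 0
        return self.pos, {}

    def step(self, action):
        self.pos += 1 if action == 1 else -1
        reward = -abs(self.pos)
        terminated = abs(self.pos) >= 5
        return self.pos, reward, terminated, False, {}


class FixedDelayWrapper:
    """Hypothetical wrapper: observations arrive `delay` steps late, and the
    agent is handed the augmented (information-state) view instead."""

    def __init__(self, env, delay: int):
        assert delay >= 1
        self.env = env
        self.delay = delay
        self._obs_buffer = deque()                 # observations not yet delivered
        self._action_buffer = deque(maxlen=delay)  # actions since last delivery

    def reset(self):
        obs, info = self.env.reset()
        self._obs_buffer.clear()
        self._action_buffer.clear()
        self._obs_buffer.append(obs)
        return MinimalInfoState(last_obs=obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._obs_buffer.append(obs)
        self._action_buffer.append(action)
        # Deliver the observation from `delay` steps ago (or the oldest one
        # available early in the episode).
        if len(self._obs_buffer) > self.delay:
            delayed_obs = self._obs_buffer.popleft()
        else:
            delayed_obs = self._obs_buffer[0]
        state = MinimalInfoState(last_obs=delayed_obs,
                                 pending_actions=list(self._action_buffer))
        return state, reward, terminated, truncated, info


if __name__ == "__main__":
    env = FixedDelayWrapper(ToyEnv(), delay=2)
    state, _ = env.reset()
    for t in range(6):
        action = random.choice([0, 1])
        state, reward, done, _, _ = env.step(action)
        print(t, state.last_obs, state.pending_actions, reward)
        if done:
            break
```

In the constant, known-delay setting this augmented tuple is what restores the Markov property; the paper's minimal information set instead targets real-time settings where decision-making and environment execution proceed concurrently and the delay cannot be measured precisely, which this fixed-delay sketch does not cover.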