Electronics and Telecommunications Research Institute (ETRI), 218 Gajeong-ro, Yuseong-gu, Daejeon, 34129, South Korea.
Neural Netw. 2024 Nov;179:106565. doi: 10.1016/j.neunet.2024.106565. Epub 2024 Jul 22.
In cooperative multi-agent reinforcement learning, agents jointly optimize a centralized value function based on a reward shared by all agents and learn decentralized policies through value function decomposition. Although this learning framework is considered effective, estimating each agent's individual contribution from the shared reward, which is essential for learning highly cooperative behaviors, is difficult. The problem becomes even more challenging when reinforcement and punishment, which respectively increase or decrease specific agent behaviors, coexist, because maximizing reinforcement and minimizing punishment often conflict in practice. This study proposes a novel exploration scheme called multi-agent decomposed reward-based exploration (MuDE), which preferentially explores the action spaces associated with positive sub-rewards based on a modified reward decomposition scheme, thereby effectively reaching action spaces that existing exploration schemes cannot. We evaluate MuDE on a challenging set of StarCraft II micromanagement tasks and modified predator-prey tasks extended to include both reinforcement and punishment. The results show that MuDE accurately estimates sub-rewards and outperforms state-of-the-art approaches in both convergence speed and win rate.
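The following is a minimal, illustrative sketch of the core idea described in the abstract: splitting a shared team reward into a positive (reinforcement) and a negative (punishment) sub-reward, and biasing exploration toward actions whose estimated positive sub-reward is high. It is not the paper's MuDE algorithm or its value-decomposition architecture; the tabular setting, the agent/action counts, and all function and variable names here are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_ACTIONS = 2, 4
ALPHA, EPSILON = 0.1, 0.2

# Hypothetical per-agent, per-action estimates of the positive (reinforcement)
# and negative (punishment) sub-rewards, plus ordinary Q-value estimates.
pos_est = np.zeros((N_AGENTS, N_ACTIONS))
neg_est = np.zeros((N_AGENTS, N_ACTIONS))
q_values = np.zeros((N_AGENTS, N_ACTIONS))


def decompose(shared_reward):
    """Split the shared team reward into a reinforcement part (>= 0) and a punishment part (<= 0)."""
    return max(shared_reward, 0.0), min(shared_reward, 0.0)


def select_actions():
    """Epsilon-greedy over Q, but exploration is drawn from a softmax over the
    estimated positive sub-rewards, so promising actions are explored first."""
    actions = np.empty(N_AGENTS, dtype=int)
    for i in range(N_AGENTS):
        if rng.random() < EPSILON:
            prefs = np.exp(pos_est[i] - pos_est[i].max())
            actions[i] = rng.choice(N_ACTIONS, p=prefs / prefs.sum())
        else:
            actions[i] = int(np.argmax(q_values[i]))
    return actions


def update(actions, shared_reward):
    """Move per-agent sub-reward and Q estimates toward the observed shared reward."""
    r_pos, r_neg = decompose(shared_reward)
    for i, a in enumerate(actions):
        pos_est[i, a] += ALPHA * (r_pos - pos_est[i, a])
        neg_est[i, a] += ALPHA * (r_neg - neg_est[i, a])
        q_values[i, a] += ALPHA * (shared_reward - q_values[i, a])


# Toy environment: joint action (1, 2) yields reinforcement, (0, 0) yields punishment.
for _ in range(2000):
    acts = select_actions()
    reward = 1.0 if tuple(acts) == (1, 2) else (-1.0 if tuple(acts) == (0, 0) else 0.0)
    update(acts, reward)

print("estimated positive sub-rewards:\n", pos_est.round(2))
```

Under these assumptions, the exploration distribution concentrates on actions that have historically contributed to reinforcement rather than punishment, which is the behavior the abstract attributes to reward-decomposition-guided exploration.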