Liu Kai, Zhang Tianxian, Xu Xiangliang, Zhao Yuyang
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China.
Neural Netw. 2025 Oct;190:107692. doi: 10.1016/j.neunet.2025.107692. Epub 2025 Jun 16.
Value decomposition has become a central focus in Multi-Agent Reinforcement Learning (MARL) in recent years. The key challenge lies in the construction and updating of the factored value function (FVF). Traditional methods rely on FVFs with restricted representational capacity, rendering them inadequate for tasks with non-monotonic payoffs. Recent approaches address this limitation by designing FVF update mechanisms that extend applicability to non-monotonic scenarios. However, these methods typically depend on the true optimal joint action value to guide FVF updates. Since computing the true optimal joint action is infeasible in practice, these methods approximate it using the greedy joint action and update the FVF with the corresponding greedy joint action value. We observe that although the greedy joint action may be close to the true optimal joint action, its associated greedy joint action value can be substantially biased relative to the true optimal joint action value. This makes the approximation unreliable and can lead to incorrect update directions for the FVF, hindering the learning process. To overcome this limitation, we propose Comix, a novel off-policy MARL method based on a Sandwich Value Decomposition Framework. Comix constrains and guides FVF updates using both upper and lower bounds. Specifically, it leverages orthogonal best responses to construct the upper bound, thus overcoming the drawbacks introduced by the optimal approximation. Furthermore, an attention mechanism is incorporated to ensure that the upper bound can be computed with linear time complexity and high accuracy. Theoretical analyses show that Comix satisfies the Individual-Global-Max (IGM) condition. Experiments on the asymmetric One-Step Matrix Game, discrete Predator-Prey, and the StarCraft Multi-Agent Challenge show that Comix achieves higher learning efficiency and outperforms several state-of-the-art methods.
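The bias the abstract describes can be illustrated with the classic non-monotonic one-step matrix game used throughout the value-decomposition literature. This is a minimal sketch under illustrative assumptions (the payoff matrix and the crude marginal-mean factorization stand in for learned individual Q-values; they are not Comix's actual construction): the greedy joint action chosen from factored utilities lands near-optimal actions, yet its payoff is far from the true optimal joint action value.

```python
import numpy as np

# Hypothetical one-step matrix game with non-monotonic payoffs
# (payoffs are illustrative assumptions, not from the paper).
payoff = np.array([[  8., -12., -12.],
                   [-12.,   0.,   0.],
                   [-12.,   0.,   0.]])

# A crude monotonic factorization: each agent's utility is its
# marginal mean payoff, standing in for learned individual Q-values.
q1 = payoff.mean(axis=1)   # agent 1's utility per action
q2 = payoff.mean(axis=0)   # agent 2's utility per action

# Greedy joint action under the factored utilities.
a1, a2 = int(np.argmax(q1)), int(np.argmax(q2))
greedy_value = payoff[a1, a2]     # value the FVF would be updated toward

# True optimal joint action value.
optimal_value = payoff.max()

print((a1, a2), greedy_value, optimal_value)
# The greedy joint action value (0.0) is biased by 8 relative to the
# true optimal joint action value (8.0), so using it as an update
# target points the FVF in the wrong direction.
```

With these numbers the factored utilities steer both agents away from the risky first action (its marginal mean is dragged down by the -12 entries), so the greedy joint action value is 0 while the true optimum is 8 — exactly the kind of biased update target the abstract argues against.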