

Hierarchical Attention Master-Slave for heterogeneous multi-agent reinforcement learning.

Affiliations

College of Information Science and Engineering, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang, 110819, Liaoning, PR China.

Publication information

Neural Netw. 2023 May;162:359-368. doi: 10.1016/j.neunet.2023.02.037. Epub 2023 Mar 4.

DOI: 10.1016/j.neunet.2023.02.037
PMID: 36940496
Abstract

Most multi-agent reinforcement learning (MARL) approaches optimize strategies through self-improvement, ignoring the limitation that homogeneous agents may serve only a single function. In reality, however, complex tasks tend to require coordinating various types of agents so that they can leverage one another's advantages. How to establish appropriate communication among such agents and optimize their decisions is therefore a vital research issue. To this end, we propose Hierarchical Attention Master-Slave (HAMS) MARL, in which hierarchical attention balances weight allocation within and among clusters, and the master-slave architecture endows agents with independent reasoning and individual guidance. With this design, information fusion, especially among clusters, is implemented effectively, excessive communication is avoided, and selectively composed actions optimize decisions. We evaluate HAMS on both small- and large-scale heterogeneous StarCraft II micromanagement tasks. The proposed algorithm achieves exceptional performance, with win rates above 80% in all evaluation scenarios and above 90% on the largest map. The experiments demonstrate a maximum improvement in win rate of 47% over the best known algorithm. These results show that our proposal outperforms recent state-of-the-art approaches and provides a novel approach to heterogeneous multi-agent policy optimization.
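To make the two-level attention idea in the abstract more concrete, below is a minimal PyTorch sketch of intra-cluster and inter-cluster attention over heterogeneous agent groups. This is not the authors' HAMS implementation; the module layout, embedding sizes, and the mean-pooled cluster summaries are assumptions introduced purely for illustration.

```python
# Minimal illustrative sketch of hierarchical (intra- and inter-cluster) attention.
# NOT the published HAMS architecture; all names and dimensions are assumptions.
import torch
import torch.nn as nn


class HierarchicalClusterAttention(nn.Module):
    """Fuse information within each cluster of agents, then across clusters."""

    def __init__(self, obs_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)
        # Intra-cluster attention: agents of the same type attend to one another.
        self.intra_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Inter-cluster attention: cluster summaries attend to one another.
        self.inter_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, clusters: list[torch.Tensor]) -> torch.Tensor:
        """clusters: list of tensors, each (batch, n_agents_in_cluster, obs_dim)."""
        summaries = []
        for obs in clusters:
            h = torch.relu(self.encoder(obs))   # (B, n_i, E) agent embeddings
            h, _ = self.intra_attn(h, h, h)     # weight allocation within a cluster
            summaries.append(h.mean(dim=1))     # (B, E) pooled cluster summary
        s = torch.stack(summaries, dim=1)       # (B, n_clusters, E)
        fused, _ = self.inter_attn(s, s, s)     # weight allocation among clusters
        return fused                            # per-cluster fused messages


if __name__ == "__main__":
    # Two heterogeneous clusters, e.g. 3 melee units and 2 ranged units.
    melee = torch.randn(8, 3, 32)
    ranged = torch.randn(8, 2, 32)
    model = HierarchicalClusterAttention(obs_dim=32)
    print(model([melee, ranged]).shape)  # torch.Size([8, 2, 64])
```

In a master-slave arrangement of the kind the abstract describes, the per-cluster fused output could serve as guidance that each agent combines with its own local reasoning before selecting an action; that wiring is omitted here for brevity.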


Similar articles

1. Hierarchical Attention Master-Slave for heterogeneous multi-agent reinforcement learning.
Neural Netw. 2023 May;162:359-368. doi: 10.1016/j.neunet.2023.02.037. Epub 2023 Mar 4.
2. Strangeness-driven exploration in multi-agent reinforcement learning.
Neural Netw. 2024 Apr;172:106149. doi: 10.1016/j.neunet.2024.106149. Epub 2024 Jan 26.
3. HyperComm: Hypergraph-based communication in multi-agent reinforcement learning.
Neural Netw. 2024 Oct;178:106432. doi: 10.1016/j.neunet.2024.106432. Epub 2024 Jun 10.
4. MuDE: Multi-agent decomposed reward-based exploration.
Neural Netw. 2024 Nov;179:106565. doi: 10.1016/j.neunet.2024.106565. Epub 2024 Jul 22.
5. Credit assignment with predictive contribution measurement in multi-agent reinforcement learning.
Neural Netw. 2023 Jul;164:681-690. doi: 10.1016/j.neunet.2023.05.021. Epub 2023 May 20.
6. Coordination as inference in multi-agent reinforcement learning.
Neural Netw. 2024 Apr;172:106101. doi: 10.1016/j.neunet.2024.106101. Epub 2024 Jan 11.
7. Optimistic sequential multi-agent reinforcement learning with motivational communication.
Neural Netw. 2024 Nov;179:106547. doi: 10.1016/j.neunet.2024.106547. Epub 2024 Jul 22.
8. LJIR: Learning Joint-Action Intrinsic Reward in cooperative multi-agent reinforcement learning.
Neural Netw. 2023 Oct;167:450-459. doi: 10.1016/j.neunet.2023.08.016. Epub 2023 Aug 22.
9. Generative subgoal oriented multi-agent reinforcement learning through potential field.
Neural Netw. 2024 Nov;179:106552. doi: 10.1016/j.neunet.2024.106552. Epub 2024 Jul 17.
10. Sample-efficient multi-agent reinforcement learning with masked reconstruction.
PLoS One. 2023 Sep 14;18(9):e0291545. doi: 10.1371/journal.pone.0291545. eCollection 2023.

Cited by

1. Enhanced hierarchical attention mechanism for mixed MIL in automatic Gleason grading and scoring.
Sci Rep. 2025 May 8;15(1):15980. doi: 10.1038/s41598-025-00048-9.