Strangeness-driven exploration in multi-agent reinforcement learning.

Affiliations

Future Convergence Engineering, Department of Computer Science and Engineering, Korea University of Technology and Education, Cheonan, 31253, Republic of Korea.

Publication information

Neural Netw. 2024 Apr;172:106149. doi: 10.1016/j.neunet.2024.106149. Epub 2024 Jan 26.

DOI: 10.1016/j.neunet.2024.106149
PMID: 38306786
Abstract

In this study, a novel exploration method for centralized training and decentralized execution (CTDE)-based multi-agent reinforcement learning (MARL) is introduced. The method uses the concept of strangeness, determined by evaluating (1) how unfamiliar the observations an agent encounters are and (2) how unfamiliar the entire state the agents visit is. An exploration bonus derived from this strangeness is combined with the extrinsic reward obtained from the environment to form a mixed reward, which is then used to train CTDE-based MARL algorithms. Additionally, a separate action-value function is proposed to prevent the high exploration bonus from overwhelming the sensitivity to extrinsic rewards during MARL training; this separate function is used to design the behavioral policy for generating transitions. The proposed method is largely unaffected by the stochastic transitions common in MARL tasks and improves the stability of CTDE-based MARL algorithms when they are paired with an exploration method. We illustrate the advantages of our approach through didactic examples and by demonstrating the substantial performance improvement our exploration method brings to CTDE-based MARL algorithms. These evaluations show that our method outperforms state-of-the-art MARL baselines on challenging tasks within the StarCraft II micromanagement benchmark, underscoring its effectiveness in improving MARL.
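
The abstract names two mechanisms: a strangeness-derived exploration bonus mixed into the extrinsic reward, and a separate action-value function that drives the behavioral policy so the bonus does not drown out the task reward. The sketch below is a minimal, hypothetical illustration of that structure in PyTorch, assuming strangeness is approximated by autoencoder reconstruction error over per-agent observations and the global state; the paper's actual estimator and architecture may differ, and every name here (AutoEncoder, strangeness_bonus, beta, q_beh) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    """Tiny MLP autoencoder; per-sample reconstruction error serves as an
    unfamiliarity (strangeness) score for its inputs."""

    def __init__(self, dim: int, hidden: int = 64, code: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, code))
        self.dec = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def error(self, x: torch.Tensor) -> torch.Tensor:
        # Mean squared reconstruction error over the feature dimension.
        return ((self.dec(self.enc(x)) - x) ** 2).mean(dim=-1)


def strangeness_bonus(obs_ae: AutoEncoder, state_ae: AutoEncoder,
                      obs: torch.Tensor, state: torch.Tensor,
                      w_obs: float = 0.5, w_state: float = 0.5) -> torch.Tensor:
    """Bonus from (1) the unfamiliarity of each agent's observation,
    averaged over agents, and (2) the unfamiliarity of the entire state."""
    # obs: (batch, n_agents, obs_dim); state: (batch, state_dim)
    per_agent = obs_ae.error(obs).mean(dim=-1)  # (batch,)
    whole_state = state_ae.error(state)         # (batch,)
    return w_obs * per_agent + w_state * whole_state


def mixed_reward(r_ext: torch.Tensor, bonus: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
    """Extrinsic reward plus a scaled strangeness bonus; per the abstract,
    this mixed reward trains the CTDE-based MARL algorithm."""
    return r_ext + beta * bonus


def behavioral_action(q_beh: nn.Module, obs: torch.Tensor,
                      eps: float = 0.05) -> torch.Tensor:
    """Epsilon-greedy over a separate action-value head used only to
    generate transitions, keeping the main value function sensitive
    to extrinsic rewards despite a high exploration bonus."""
    q = q_beh(obs)                              # (batch, n_actions)
    greedy = q.argmax(dim=-1)
    random_a = torch.randint(q.shape[-1], greedy.shape)
    explore = torch.rand(greedy.shape) < eps
    return torch.where(explore, random_a, greedy)
```

In a sketch like this, the autoencoders would be trained online on visited observations and states, so the bonus decays as regions become familiar; how the paper schedules that training, and exactly how the separate behavioral head relates to the mixed-reward value function, are details the abstract does not specify.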

Similar articles

1. Strangeness-driven exploration in multi-agent reinforcement learning.
Neural Netw. 2024 Apr;172:106149. doi: 10.1016/j.neunet.2024.106149. Epub 2024 Jan 26.

2. LJIR: Learning Joint-Action Intrinsic Reward in cooperative multi-agent reinforcement learning.
Neural Netw. 2023 Oct;167:450-459. doi: 10.1016/j.neunet.2023.08.016. Epub 2023 Aug 22.

3. Hierarchical task network-enhanced multi-agent reinforcement learning: Toward efficient cooperative strategies.
Neural Netw. 2025 Jun;186:107254. doi: 10.1016/j.neunet.2025.107254. Epub 2025 Feb 11.

4. Credit assignment with predictive contribution measurement in multi-agent reinforcement learning.
Neural Netw. 2023 Jul;164:681-690. doi: 10.1016/j.neunet.2023.05.021. Epub 2023 May 20.

5. An off-policy multi-agent stochastic policy gradient algorithm for cooperative continuous control.
Neural Netw. 2024 Feb;170:610-621. doi: 10.1016/j.neunet.2023.11.046. Epub 2023 Nov 23.

6. MuDE: Multi-agent decomposed reward-based exploration.
Neural Netw. 2024 Nov;179:106565. doi: 10.1016/j.neunet.2024.106565. Epub 2024 Jul 22.

7. Coordination as inference in multi-agent reinforcement learning.
Neural Netw. 2024 Apr;172:106101. doi: 10.1016/j.neunet.2024.106101. Epub 2024 Jan 11.

8. TIMAR: Transition-informed representation for sample-efficient multi-agent reinforcement learning.
Neural Netw. 2025 Apr;184:107081. doi: 10.1016/j.neunet.2024.107081. Epub 2024 Dec 31.

9. HyperComm: Hypergraph-based communication in multi-agent reinforcement learning.
Neural Netw. 2024 Oct;178:106432. doi: 10.1016/j.neunet.2024.106432. Epub 2024 Jun 10.

10. SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multiagent Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):52-63. doi: 10.1109/TNNLS.2021.3089493. Epub 2023 Jan 5.