Decentralized multi-agent reinforcement learning based on best-response policies.

Authors

Volker Gabler, Dirk Wollherr

Affiliations

Chair of Automatic Control Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany.

Publication

Front Robot AI. 2024 Apr 16;11:1229026. doi: 10.3389/frobt.2024.1229026. eCollection 2024.

Abstract

Multi-agent systems are an interdisciplinary research field that describes the concept of multiple decision-making individuals interacting with a usually partially observable environment. Given the recent advances in single-agent reinforcement learning, multi-agent reinforcement learning (MARL) has gained tremendous interest in recent years. Most studies apply a fully centralized learning scheme to ease the transfer from the single-agent domain to multi-agent systems. In contrast, we claim that a decentralized learning scheme is preferable for applications in real-world scenarios, as it allows deploying a learning algorithm on an individual robot rather than on a complete fleet of robots. Therefore, this article outlines a novel actor-critic (AC) approach tailored to cooperative MARL problems in sparsely rewarded domains. Our approach decouples the MARL problem into a set of distributed agents that model the other agents as responsive entities. In particular, we propose using two separate critics per agent to distinguish between the joint task reward and agent-based costs, as commonly applied in multi-robot planning. On the one hand, the agent-based critic aims to decrease agent-specific costs. On the other hand, each agent optimizes the joint team reward based on the joint task critic. As this critic still depends on the joint action of all agents, we outline two suitable behavior models based on Stackelberg games: a game against nature and a dyadic game against each agent. Following these behavior models, our algorithm allows fully decentralized execution and training. We evaluate the presented method using the proposed behavior models within a sparsely rewarded simulated multi-agent environment. Although our approach already outperforms state-of-the-art learners, we conclude this article by outlining possible extensions of our algorithm that future research may build upon.
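The two-critic, best-response structure described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration under our own assumptions, not the authors' implementation: all names (MLP, DecentralizedAgent, response_model, policy_loss, cost_weight) are invented here, and the critic training targets, the exact Stackelberg behavior models, and exploration are omitted. Each agent keeps a task critic that evaluates the joint action, with the other agents' actions supplied by a learned best-response model (the leader's perspective in a Stackelberg game), plus a cost critic over its own action only.

```python
# Minimal, hypothetical sketch of the two-critic, best-response idea --
# NOT the authors' implementation. Names (MLP, DecentralizedAgent,
# response_model, cost_weight) are illustrative assumptions; critic TD
# targets, behavior-model details, and exploration are omitted.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class DecentralizedAgent:
    """One agent holding an actor and two critics:
    - task_critic scores the shared team return and therefore takes the
      joint action (own action plus a predicted response of the others);
    - cost_critic scores agent-specific costs from the own action alone."""

    def __init__(self, obs_dim, act_dim, n_others, lr=3e-4):
        self.actor = MLP(obs_dim, act_dim)
        # Best-response model: predicts how the other agents react to this
        # agent's action (the leader's view in a Stackelberg game).
        self.response_model = MLP(obs_dim + act_dim, n_others * act_dim)
        self.task_critic = MLP(obs_dim + act_dim + n_others * act_dim, 1)
        self.cost_critic = MLP(obs_dim + act_dim, 1)
        self.actor_opt = torch.optim.Adam(
            list(self.actor.parameters())
            + list(self.response_model.parameters()), lr=lr)

    def policy_loss(self, obs, cost_weight=1.0):
        """Maximize the joint task value under the modeled best response of
        the other agents while minimizing the agent-specific cost value."""
        a_i = torch.tanh(self.actor(obs))
        a_others = torch.tanh(self.response_model(torch.cat([obs, a_i], -1)))
        q_task = self.task_critic(torch.cat([obs, a_i, a_others], -1))
        q_cost = self.cost_critic(torch.cat([obs, a_i], -1))
        return -(q_task - cost_weight * q_cost).mean()
```

In this reading, each agent trains from its own observations, its own critics, and a locally fitted response model, so nothing in the update requires sharing learner internals across robots, which matches the fully decentralized training and execution claimed in the abstract.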

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9277/11059992/46f58e034ced/frobt-11-1229026-g001.jpg
