
Modular fuzzy-reinforcement learning approach with internal model capabilities for multiagent systems.

Author information

Kaya Mehmet, Alhajj Reda

Affiliation

Department of Computer Engineering, Firat University, 23119 Elazığ, Turkey.

Publication information

IEEE Trans Syst Man Cybern B Cybern. 2004 Apr;34(2):1210-23. doi: 10.1109/tsmcb.2003.821869.

Abstract

To date, many researchers have proposed various methods to improve the learning ability in multiagent systems. However, most of these studies are not appropriate to more complex multiagent learning problems because the state space of each learning agent grows exponentially in terms of the number of partners present in the environment. Modeling other learning agents present in the domain as part of the state of the environment is not a realistic approach. In this paper, we combine advantages of the modular approach, fuzzy logic and the internal model in a single novel multiagent system architecture. The architecture is based on a fuzzy modular approach whose rule base is partitioned into several different modules. Each module deals with a particular agent in the environment and maps the input fuzzy sets to the action Q-values; these represent the state space of each learning module and the action space, respectively. Each module also uses an internal model table to estimate actions of the other agents. Finally, we investigate the integration of a parallel update method with the proposed architecture. Experimental results obtained on two different environments of a well-known pursuit domain show the effectiveness and robustness of the proposed multiagent architecture and learning approach.
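The architecture the abstract describes (one learning module per partner agent, fuzzy input sets mapped to action Q-values, and an internal model table that estimates the other agents' actions) can be illustrated with a minimal sketch. This is not the authors' implementation: the fuzzy sets (`near`/`mid`/`far` over a scalar distance), class names, learning rates, and the dominant-set bookkeeping for the internal model are all illustrative assumptions.

```python
import random
from collections import defaultdict

def fuzzify(distance, far=10.0):
    """Triangular memberships of a scalar distance in three fuzzy sets
    (illustrative choice of sets; the memberships always sum to 1)."""
    d = max(0.0, min(distance, far)) / far  # normalise to [0, 1]
    near = max(0.0, 1.0 - 2.0 * d)
    mid = max(0.0, 1.0 - abs(2.0 * d - 1.0))
    far_m = max(0.0, 2.0 * d - 1.0)
    return {"near": near, "mid": mid, "far": far_m}

class PartnerModule:
    """One module per partner agent: maps fuzzy states to action Q-values
    and keeps an internal model table of the partner's observed actions."""

    def __init__(self, actions, alpha=0.2, gamma=0.9):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)  # (fuzzy_set, action) -> Q-value
        self.model = defaultdict(lambda: defaultdict(int))  # fuzzy_set -> action counts

    def q_value(self, memberships, action):
        # Crisp Q-value: membership-weighted sum over the fuzzy states.
        return sum(mu * self.q[(s, action)] for s, mu in memberships.items())

    def update(self, memberships, action, reward, next_memberships):
        # Standard Q-learning target, with the TD error distributed
        # across fuzzy states in proportion to their membership degrees.
        best_next = max(self.q_value(next_memberships, a) for a in self.actions)
        td = reward + self.gamma * best_next - self.q_value(memberships, action)
        for s, mu in memberships.items():
            self.q[(s, action)] += self.alpha * mu * td

    def observe_partner(self, memberships, partner_action):
        # Internal model table: count the partner's action in the
        # dominant fuzzy state (a simplification for the sketch).
        s = max(memberships, key=memberships.get)
        self.model[s][partner_action] += 1

    def predict_partner(self, memberships):
        # Predict the partner's most frequently observed action.
        s = max(memberships, key=memberships.get)
        counts = self.model[s]
        return max(counts, key=counts.get) if counts else random.choice(self.actions)
```

In a full system there would be one such module per partner agent, so each module's state space stays small instead of growing exponentially with the number of partners, which is the motivation the abstract gives for the modular decomposition.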


Similar articles

1. Modular fuzzy-reinforcement learning approach with internal model capabilities for multiagent systems. IEEE Trans Syst Man Cybern B Cybern. 2004 Apr;34(2):1210-23. doi: 10.1109/tsmcb.2003.821869.
2. Fuzzy OLAP association rules mining-based modular reinforcement learning approach for multiagent systems. IEEE Trans Syst Man Cybern B Cybern. 2005 Apr;35(2):326-38. doi: 10.1109/tsmcb.2004.843278.
3. Intelligent multiagent coordination based on reinforcement hierarchical neuro-fuzzy models. Int J Neural Syst. 2014 Dec;24(8):1450031. doi: 10.1142/S0129065714500312. Epub 2014 Nov 18.
4. A parallel fuzzy inference model with distributed prediction scheme for reinforcement learning. IEEE Trans Syst Man Cybern B Cybern. 1998;28(2):160-72. doi: 10.1109/3477.662757.
5. Reinforcement interval type-2 fuzzy controller design by online rule generation and q-value-aided ant colony optimization. IEEE Trans Syst Man Cybern B Cybern. 2009 Dec;39(6):1528-42. doi: 10.1109/TSMCB.2009.2020569. Epub 2009 May 27.
6. A fuzzy adaptive learning control network with on-line structure and parameter learning. Int J Neural Syst. 1996 Nov;7(5):569-90. doi: 10.1142/s0129065796000567.
7. Model learning and knowledge sharing for a multiagent system with Dyna-Q learning. IEEE Trans Cybern. 2015 May;45(5):964-76. doi: 10.1109/TCYB.2014.2341582. Epub 2014 Aug 5.
8. eFSM--a novel online neural-fuzzy semantic memory model. IEEE Trans Neural Netw. 2010 Jan;21(1):136-57. doi: 10.1109/TNN.2009.2035116. Epub 2009 Dec 11.
9. Multiagent Learning of Coordination in Loosely Coupled Multiagent Systems. IEEE Trans Cybern. 2015 Dec;45(12):2853-67. doi: 10.1109/TCYB.2014.2387277. Epub 2015 Jan 13.
10. Genetic reinforcement learning through symbiotic evolution for fuzzy controller design. IEEE Trans Syst Man Cybern B Cybern. 2000;30(2):290-302. doi: 10.1109/3477.836377.
