
An Improved Approach towards Multi-Agent Pursuit-Evasion Game Decision-Making Using Deep Reinforcement Learning.

Author Information

Wan Kaifang, Wu Dingwei, Zhai Yiwei, Li Bo, Gao Xiaoguang, Hu Zijian

Affiliation

School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China.

Publication Information

Entropy (Basel). 2021 Oct 29;23(11):1433. doi: 10.3390/e23111433.

Abstract

A pursuit-evasion game is a classical maneuver confrontation problem in the multi-agent systems (MASs) domain. An online decision technique based on deep reinforcement learning (DRL) was developed in this paper to address the problem of environment sensing and decision-making in pursuit-evasion games. A control-oriented framework developed from the DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm was built to implement multi-agent cooperative decision-making to overcome the limitation of the tedious state variables required for the traditionally complicated modeling process. To address the effects of errors between a model and a real scenario, this paper introduces adversarial disturbances. It also proposes a novel adversarial attack trick and an adversarial learning MADDPG (A2-MADDPG) algorithm. By introducing an adversarial attack trick for the agents themselves, uncertainties of the real world are modeled, thereby optimizing robust training. During the training process, adversarial learning was incorporated into our algorithm to preprocess the actions of multiple agents, which enabled them to properly respond to uncertain dynamic changes in MASs. Experimental results verified that the proposed approach provides superior performance and effectiveness for pursuers and evaders, and both can learn the corresponding confrontational strategy during training.
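The abstract's central idea, perturbing agents' actions with worst-case disturbances during training so the learned policies become robust to real-world uncertainty, can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the authors' implementation: the toy quadratic critic, the random-search attack inside an epsilon-ball, and all names (`critic_q`, `adversarial_action`, `eps`) are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_q(obs, action, w):
    """Toy centralized critic: a fixed quadratic in (obs, action); higher is better."""
    z = np.concatenate([obs, action])
    return -float(z @ w @ z)

def adversarial_action(obs, action, w, eps=0.1, n_probes=16):
    """Search an eps-ball around the agent's action for the perturbation
    that most decreases the critic's value, mimicking a worst-case
    disturbance applied to actions during robust training."""
    best_q, best_a = critic_q(obs, action, w), action
    for _ in range(n_probes):  # gradient-free random-search attack
        cand = np.clip(action + rng.uniform(-eps, eps, size=action.shape), -1.0, 1.0)
        q = critic_q(obs, cand, w)
        if q < best_q:
            best_q, best_a = q, cand
    return best_a

dim_o, dim_a = 4, 2
w = np.eye(dim_o + dim_a)
obs = rng.normal(size=dim_o)
act = np.clip(rng.normal(size=dim_a), -1.0, 1.0)
adv = adversarial_action(obs, act, w)

# The perturbed action never scores better than the original under the critic,
# and it stays within the eps-ball around the original action.
assert critic_q(obs, adv, w) <= critic_q(obs, act, w)
assert np.all(np.abs(adv - act) <= 0.1 + 1e-9)
```

In an actual A2-MADDPG-style loop, the perturbed action would replace the clean action when computing the critic's target, so each agent learns to perform well even under the worst disturbance the attack can find.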


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1736/8625563/105d0a0799d4/entropy-23-01433-g001.jpg
