Resilient Autonomous Control of Distributed Multiagent Systems in Contested Environments.

Author Information

Moghadam Rohollah, Modares Hamidreza

Publication Information

IEEE Trans Cybern. 2019 Nov;49(11):3957-3967. doi: 10.1109/TCYB.2018.2856089. Epub 2018 Aug 17.

Abstract

An autonomous and resilient controller is proposed for leader-follower multiagent systems under uncertainties and cyber-physical attacks. The leader is assumed nonautonomous with a nonzero control input, which allows changing the team behavior or mission in response to the environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H∞ controller is first designed to prevent propagating the effects of attacks on sensors and actuators throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Nonhomogeneous game algebraic Riccati equations are derived to solve the H∞ optimal synchronization problem and off-policy reinforcement learning (RL) is utilized to learn their solution without requiring any knowledge of the agent's dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack the entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient RL algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcast it to its neighbors to put weights on the data they receive from it during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and remove the data it receives from them from the learning process. The simulation results are provided to show the effectiveness of the proposed approach.
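To make the trust-confidence idea concrete, the sketch below shows a minimal consensus-style state update in which each agent weights the relative-state information from a neighbor by that neighbor's broadcast confidence value and discards data from neighbors whose confidence falls below a trust threshold. This is an illustrative assumption, not the paper's algorithm: the function name confidence_weighted_consensus, the multiplicative weighting form, and the hard threshold rule are invented here for exposition, whereas in the paper the confidence value is computed from each agent's local evidence and the trust mechanism is embedded in the off-policy RL learning process.

import numpy as np

def confidence_weighted_consensus(x, A, confidence, trust_threshold=0.5, step=0.1):
    """One consensus-style update in which each agent weights neighbor data by the
    neighbor's broadcast confidence and drops neighbors below a trust threshold.

    x          : (N, n) array of agent states
    A          : (N, N) adjacency matrix of the communication graph
    confidence : (N,) confidence values in [0, 1] broadcast by each agent
    """
    N = x.shape[0]
    x_next = x.copy()
    for i in range(N):
        update = np.zeros_like(x[i])
        for j in range(N):
            if A[i, j] == 0:
                continue  # j is not a neighbor of i
            if confidence[j] < trust_threshold:
                continue  # trust gating: discard data from a suspected compromised neighbor
            # weight the relative-state information by the sender's confidence
            update += confidence[j] * A[i, j] * (x[j] - x[i])
        x_next[i] = x[i] + step * update
    return x_next

# Example: four agents on a ring graph; agent 2 broadcasts a low confidence value,
# so its neighbors ignore the data received from it in this update.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 3)               # three-dimensional agent states
conf = np.array([1.0, 0.9, 0.2, 1.0])   # agent 2's data is down-weighted to zero
x = confidence_weighted_consensus(x, A, conf)

The sketch only conveys the weighting-and-gating idea behind the protocol; it omits the observer-based H∞ design, the nonhomogeneous game algebraic Riccati equations, and the learning of their solution described in the abstract.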

