IEEE Trans Neural Netw Learn Syst. 2018 Jun;29(6):2139-2153. doi: 10.1109/TNNLS.2018.2803059.
This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. Agents are assumed to be heterogeneous, with different dynamics and dimensions. The desired trajectory is assumed to be preplanned and is generated by the leader; the follower agents autonomously synchronize to the leader by interacting with one another over a communication network. The leader is assumed to be active in the sense that it has a nonzero control input, so that it can act independently and update its control to steer the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. The output synchronization of leader-follower systems with an active leader is then formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning algorithm is developed to solve the inhomogeneous AREs online, in real time, and without requiring any knowledge of the agents' dynamics. Finally, two simulation examples illustrate the effectiveness of the proposed algorithm.
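To make the tracking formulation concrete, the sketch below solves the simpler model-based special case with an autonomous (zero-input) leader: a discounted linear-quadratic tracking ARE on the augmented follower-leader state, solved directly with SciPy. This is the degenerate, homogeneous counterpart of the paper's inhomogeneous AREs, which the paper instead solves model-free via off-policy reinforcement learning. All system matrices, weights, and the discount gamma here are illustrative assumptions, not the paper's simulation examples.

```python
# Minimal model-based sketch of discounted LQ tracking for one follower.
# Assumed illustrative matrices; NOT the paper's examples or its RL algorithm.
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag
from scipy.integrate import solve_ivp

# Follower: double integrator with output y = C x
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Leader (reference generator, zero input here): harmonic oscillator,
# desired output y0 = C0 x0 is a sinusoid.
F = np.array([[0.0, 1.0], [-1.0, 0.0]])
C0 = np.array([[1.0, 0.0]])

# Augmented dynamics for X = [x; x0]
T = block_diag(A, F)
B1 = np.vstack([B, np.zeros((2, 1))])

# Quadratic cost on the output tracking error e = C x - C0 x0
M = np.hstack([C, -C0])
Q1 = 10.0 * M.T @ M
R = np.array([[1.0]])

# A discount gamma > 0 shifts T by -gamma/2, moving the leader's marginally
# stable (uncontrollable) modes into the open left half-plane so a standard
# ARE solver applies.
gamma = 0.2
P = solve_continuous_are(T - 0.5 * gamma * np.eye(4), B1, Q1, R)
K = np.linalg.solve(R, B1.T @ P)  # optimal feedback u = -K X

# Closed-loop check: follower output should approach the leader's sinusoid,
# with a small residual error typical of discounted tracking designs.
def rhs(t, X):
    return (T - B1 @ K) @ X

sol = solve_ivp(rhs, (0.0, 30.0), np.array([2.0, 0.0, 1.0, 0.0]))
print("gain K =", K)
print("output tracking error at t = 30:", (M @ sol.y[:, -1]).item())
```

With an active leader, the nonzero leader input adds a term linear in the state to each follower's value function, which is what renders the paper's Riccati equations inhomogeneous; the homogeneous ARE above recovers the zero-input case.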