Yang Xindi, Zhang Hao, Wang Zhuping
IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3872-3883. doi: 10.1109/TNNLS.2021.3054685. Epub 2022 Aug 3.
This article investigates the optimally distributed consensus control problem for discrete-time multiagent systems with completely unknown dynamics and differing computational abilities. The problem can be viewed as solving nonzero-sum games with distributed reinforcement learning (RL), where each agent is a player in these games. First, to guarantee the real-time performance of learning algorithms, a data-based distributed control algorithm is proposed for multiagent systems using offline data sets of system interactions. By utilizing the interaction data produced while the system runs in real time, the proposed algorithm improves system performance through distributed policy-gradient RL. Convergence and stability are guaranteed via functional analysis and the Lyapunov method. Second, to address the asynchronous learning caused by differences in computational ability across the multiagent system, the proposed algorithm is extended to an asynchronous version in which whether each agent executes policy improvement is independent of its neighbors. Furthermore, an actor-critic structure containing two neural networks is developed to implement the proposed algorithm in both the synchronous and asynchronous cases. Based on the method of weighted residuals, the convergence and optimality of the neural networks are guaranteed by proving that the approximation errors converge to zero. Finally, simulations demonstrate the effectiveness of the proposed algorithm.
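To make the learning loop concrete, the following is a minimal NumPy sketch of the structure the abstract describes: each agent holds a critic (value approximator) and an actor (control policy), performs policy evaluation from interaction data at every step, and, in the asynchronous case, performs policy improvement only when its own computational budget allows, independently of its neighbors. The dynamics, feature map, cost, step sizes, and update rules below are illustrative assumptions, not the paper's algorithm; in the paper, both the actor and the critic are neural networks whose convergence is established via the method of weighted residuals.

```python
# Illustrative sketch only: linear feedback actor and quadratic-feature
# critic stand in for the paper's neural networks.
import numpy as np

rng = np.random.default_rng(0)

N_STEPS = 200   # length of the interaction data set (assumption)
GAMMA = 0.95    # discount factor (assumption)

class Agent:
    def __init__(self, budget):
        self.K = np.zeros((1, 2))  # actor: linear state feedback u = -K x
        self.w = np.zeros(3)       # critic weights on quadratic features
        self.budget = budget       # fraction of steps with spare compute (hypothetical)

    def features(self, x):
        # quadratic features phi(x) = [x1^2, x1*x2, x2^2], so V(x) ~ w . phi(x)
        return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

    def act(self, x):
        return -self.K @ x

    def critic_update(self, x, cost, x_next, lr=1e-3):
        # policy evaluation: one-step temporal-difference update of the critic
        phi, phi_next = self.features(x), self.features(x_next)
        td = cost + GAMMA * self.w @ phi_next - self.w @ phi
        self.w += lr * td * phi

    def actor_update(self, x, lr=1e-4):
        # crude policy-improvement step using the critic's value gradient;
        # a hypothetical update rule, for illustration only
        dV_dx = np.array([2 * self.w[0] * x[0] + self.w[1] * x[1],
                          self.w[1] * x[0] + 2 * self.w[2] * x[1]])
        self.K += lr * dV_dx[np.newaxis, :]

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed double-integrator-like dynamics
B = np.array([[0.0], [0.1]])

agents = [Agent(budget=b) for b in (1.0, 0.5, 0.2)]  # heterogeneous compute
states = [rng.normal(size=2) for _ in agents]

for k in range(N_STEPS):
    mean_state = np.mean(states, axis=0)  # consensus target (all-to-all graph assumed)
    for i, (ag, x) in enumerate(zip(agents, states)):
        u = ag.act(x)
        x_next = A @ x + (B @ u).ravel()
        # local cost penalizes disagreement with the group plus control effort
        e = x - mean_state
        cost = float(e @ e + u @ u)
        ag.critic_update(x, cost, x_next)  # every agent always evaluates
        if rng.random() < ag.budget:       # asynchronous: improve the policy
            ag.actor_update(x)             # only when compute is available
        states[i] = x_next

disagreement = np.linalg.norm(np.array(states) - np.mean(states, axis=0))
print(f"final disagreement: {disagreement:.3f}")
```

The `budget` parameter is a hypothetical stand-in for an agent's computational ability: in the synchronous case all budgets equal 1, so every agent improves its policy at every step, while smaller budgets reproduce the asynchronous case in which policy improvement is skipped independently of the neighbors' schedules.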