Long Jia, Yu Dengxiu, Wen Guoxing, Li Li, Wang Zhen, Chen C L Philip
IEEE Trans Neural Netw Learn Syst. 2022 Jun 3;PP. doi: 10.1109/TNNLS.2022.3177461.
In this article, a game-based backstepping control method is proposed for high-order nonlinear multi-agent systems with unknown dynamics and input saturation. Reinforcement learning (RL) is employed to obtain the saddle-point solution of the tracking game between each agent and the reference signal, thereby achieving robust control. Specifically, the approximate optimal solution of the established Hamilton-Jacobi-Isaacs (HJI) equation is obtained by policy iteration for each subsystem, and the single network adaptive critic (SNAC) architecture is used to reduce the computational burden. In addition, by separating the error term from the derivative of the value function, different weights are assigned to the two players in the game, which allows the final equilibrium point to be regulated. Unlike the common practice of using a neural network for system identification, the unknown nonlinear dynamic term is approximated from the state difference provided by the command filter. Furthermore, a sufficient condition is established to guarantee that the whole system, and each subsystem it contains, is uniformly ultimately bounded. Finally, simulation results are given to show the effectiveness of the proposed method.
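The policy-iteration step for the HJI equation mentioned in the abstract can be illustrated on a toy problem. The sketch below is not the paper's method (which handles nonlinear multi-agent subsystems with a neural critic): it uses an illustrative scalar linear system where the zero-sum game's value function is quadratic, so the HJI equation collapses to a scalar game Riccati equation and the saddle-point fixed point can be checked in closed form. All dynamics coefficients and cost weights are assumptions chosen for the example, and both players' policies are improved simultaneously, a simplified variant of game-theoretic policy iteration.

```python
import math

# Illustrative scalar zero-sum game (all numbers are assumptions):
#   dx/dt = a*x + b*u + k*d
# where controller u minimizes and disturbance d maximizes
#   integral( q*x^2 + r*u^2 - gamma^2 * d^2 ) dt.
# With a quadratic value V(x) = p*x^2, the HJI equation reduces to the
# scalar game Riccati equation  q + 2*a*p - (b^2/r - k^2/gamma^2)*p^2 = 0.
a, b, k = 1.0, 1.0, 0.5
q, r, gamma = 1.0, 1.0, 2.0

def policy_iteration(K1=2.0, K2=0.0, iters=20):
    """Simultaneous policy iteration: u = -K1*x, d = K2*x."""
    p = 0.0
    for _ in range(iters):
        # Policy evaluation: under the current gains the closed loop is
        # dx/dt = acl*x, and p solves the Lyapunov-like equation
        # q + r*K1^2 - gamma^2*K2^2 + 2*p*acl = 0 (needs acl < 0).
        acl = a - b * K1 + k * K2
        assert acl < 0, "current policy must be stabilizing"
        p = -(q + r * K1**2 - gamma**2 * K2**2) / (2.0 * acl)
        # Policy improvement from dV/dx = 2*p*x:
        # minimizer u* = -(b/r)*p*x, maximizer d* = (k/gamma^2)*p*x.
        K1 = b * p / r
        K2 = k * p / gamma**2
    return p

# Closed-form positive root of the game Riccati equation, for comparison.
c = b**2 / r - k**2 / gamma**2
p_star = (a + math.sqrt(a**2 + c * q)) / c

p_pi = policy_iteration()
print(p_pi, p_star)  # both approximately 2.5514
```

The iteration converges to the saddle-point value because, provided the attenuation level `gamma` is large enough for a stabilizing solution to exist, each evaluation/improvement cycle contracts toward the unique positive Riccati root; the paper's SNAC architecture plays the role of the critic that approximates this value function when no closed form exists.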