Xiao Lin, Jia Lei, Dai Jianhua, Cao Yingkun, Li Yiwei, Zhu Quanxin, Li Jichun, Liu Min
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3478-3487. doi: 10.1109/TNNLS.2022.3193429. Epub 2024 Feb 29.
In this article, a novel distributed gradient neural network (DGNN) with predefined-time convergence (PTC) is proposed to solve consensus problems that widely exist in multiagent systems (MASs). Compared with previous gradient neural networks (GNNs) for optimization and computation, the proposed DGNN model works in a non-fully connected way, in which each neuron needs only the information of its neighbor neurons to converge to the equilibrium point. The convergence and asymptotic stability of the DGNN model are proved according to Lyapunov theory. In addition, based on a relatively loose condition, three novel nonlinear activation functions are designed to speed up the DGNN model to PTC, which is proved by rigorous theory. Numerical results further verify the effectiveness, especially the PTC, of the proposed nonlinearly activated DGNN model in solving various consensus problems of MASs. Finally, a practical case of directional consensus is presented to show the feasibility of the DGNN model, and a corresponding connectivity-testing example is given to verify the influence of connectivity on the convergence speed.
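The abstract does not give the DGNN equations, so the following is only a minimal sketch of the kind of distributed, gradient-based consensus dynamics it describes: an energy function built from the graph Laplacian, a gradient flow in which each agent uses only neighbor states, and a hypothetical nonlinear activation (a sign-power mix often used in finite-/predefined-time designs) to accelerate convergence. The model form, the activation, and all parameter values here are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

# Sketch (assumed form): gradient flow x_dot = -gamma * phi(L @ x), where
# E(x) = 0.5 * x^T L x is a consensus energy, L is the graph Laplacian
# (so each agent only touches neighbor information), and phi is a
# hypothetical nonlinear activation. Not the paper's exact DGNN model.

def graph_laplacian(adjacency):
    """Laplacian L = D - A of an undirected communication graph."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

def activation(e, p=0.5, q=1.5):
    """Hypothetical elementwise sign-power activation.

    Mixing a sub-linear (p < 1) and a super-linear (q > 1) power term is a
    common construction in finite-/predefined-time stability designs.
    """
    return np.sign(e) * (np.abs(e) ** p + np.abs(e) ** q)

def simulate(adjacency, x0, gamma=2.0, dt=1e-3, steps=5000):
    """Forward-Euler integration of x_dot = -gamma * phi(L @ x)."""
    L = graph_laplacian(adjacency)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * gamma * activation(L @ x)
    return x

if __name__ == "__main__":
    # Ring of 4 agents: each agent exchanges state only with two neighbors,
    # i.e., the network is connected but not fully connected.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x0 = [4.0, -1.0, 0.5, 2.5]
    print(simulate(A, x0))  # agent states approach a common consensus value
```

In this sketch, V(x) = 0.5 * x^T L x is nonincreasing along trajectories because z * phi(z) >= 0 for the odd, increasing activation, so the states converge to the consensus set where L x = 0; the nonlinear activation only changes how fast that happens, which is the role the abstract assigns to the three proposed activation functions.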