Department of Mathematics, Harbin Institute of Technology, Weihai, 264209, China.
Department of Mathematics, Harbin Institute of Technology, Harbin, 150001, China.
Neural Netw. 2022 Feb;146:161-173. doi: 10.1016/j.neunet.2021.11.013. Epub 2021 Nov 16.
Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve distributed convex optimization problems with inequality and set constraints. Most existing approaches to distributed convex optimization are first-order, and the convergence rate of their state solutions is usually hard to analyze. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve a faster convergence rate. Moreover, existing second-order approaches are mostly designed for unconstrained distributed convex optimization problems and are not suitable for constrained ones. It is shown that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem. An error function, which characterizes the performance of the designed approach, exhibits superquadratic convergence. Several numerical examples demonstrate the effectiveness of the presented second-order accelerated neurodynamic approach.
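To illustrate the general idea of a second-order (inertial) neurodynamic approach for distributed convex optimization, the following is a minimal sketch, not the paper's actual method: three agents with quadratic local costs f_i(x) = 0.5(x - a_i)^2 run primal-dual inertial dynamics over a ring graph, discretized with forward Euler. All parameter values (gamma, beta, dt) and the choice of costs are illustrative assumptions, and the sketch omits the paper's inequality and set constraints (which would require, e.g., projection terms).

```python
import numpy as np

# Graph Laplacian of a 3-agent ring network
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])

# Local cost minimizers; the global optimum of sum_i f_i is mean(a) = 4.0
a = np.array([1.0, 4.0, 7.0])

# Illustrative parameters: damping, consensus gain, Euler step, iterations
gamma, beta, dt, steps = 4.0, 1.0, 0.01, 40_000

x = np.zeros(3)      # agent states
xdot = np.zeros(3)   # velocities (the inertial, second-order term)
v = np.zeros(3)      # dual variables enforcing consensus

for _ in range(steps):
    grad = x - a  # gradients of the local costs f_i
    # Second-order inertial dynamics with damping, local gradient,
    # consensus coupling, and a dual correction term
    xddot = -gamma * xdot - grad - beta * (L @ x) - L @ v
    v += dt * (L @ x)        # dual integrator on the consensus constraint
    xdot += dt * xddot
    x += dt * xdot

print(x)  # all agents close to the global optimum mean(a) = 4.0
```

At equilibrium, v̇ = 0 forces Lx = 0 (consensus), and summing the stationarity condition over agents cancels the Laplacian terms, leaving Σᵢ∇fᵢ(x*) = 0, so the common state is the global minimizer. The damping gamma plays the role the acceleration control design plays in the paper's approach.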