Zhao You, Liao Xiaofeng, He Xing
Key Laboratory of Dependable Services Computing in Cyber-Physical Society (Chongqing) Ministry of Education, College of Computer Science, Chongqing University, Chongqing, 400044, China.
College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China.
Neural Netw. 2022 Jun;150:336-349. doi: 10.1016/j.neunet.2022.03.011. Epub 2022 Mar 15.
Constrained convex optimization problems arise in a variety of scientific and engineering applications that often require efficient and fast solutions. Inspired by Nesterov's accelerated method for solving unconstrained convex and strongly convex optimization problems, in this paper we propose two novel accelerated projection neurodynamic approaches for constrained smooth convex and strongly convex optimization based on the variational approach. First, for smooth convex optimization problems, a non-autonomous accelerated projection neurodynamic approach (NAAPNA) is presented, and the existence, uniqueness, and feasibility of its solution are analyzed rigorously. We prove that NAAPNA achieves a convergence rate inversely proportional to the square of the running time. In addition, we present a novel autonomous accelerated projection neurodynamic approach (AAPNA) for constrained, smooth, strongly convex optimization problems and prove the existence and uniqueness of the strong global solution of AAPNA based on the Cauchy-Lipschitz-Picard theorem. Furthermore, we prove the global convergence of AAPNA, with different exponential convergence rates for different parameters. Compared with existing projection neurodynamic approaches based on Brouwer's fixed point theorem, both NAAPNA and AAPNA use projection operators on an auxiliary variable to map the primal variables into the constrained feasible region, which makes the proposed neurodynamic approaches easier to accelerate. Finally, the effectiveness of NAAPNA and AAPNA is illustrated with several numerical examples.
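The core idea combined in the abstract, Nesterov-style acceleration plus projection onto the feasible region, can be illustrated in discrete time by accelerated projected gradient descent. The sketch below is a generic, hypothetical illustration and not the authors' NAAPNA or AAPNA dynamics; the box constraint, step size, and quadratic objective are assumptions chosen for clarity:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (the feasible region)."""
    return np.clip(x, lo, hi)

def accelerated_projected_gradient(grad, project, x0, step, iters=200):
    """Nesterov-style momentum with projection onto the feasible set.

    A discrete-time analogue of accelerated projection dynamics:
    extrapolate with momentum, take a gradient step, then project.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, iters + 1):
        momentum = (k - 1) / (k + 2)      # classic Nesterov weight
        y = x + momentum * (x - x_prev)   # extrapolation step
        x_prev = x
        x = project(y - step * grad(y))   # gradient step, then projection
    return x

# Example: minimize ||x - c||^2 / 2 over the box [0, 1]^2, where the
# unconstrained minimizer c lies outside the feasible region.
c = np.array([1.5, -0.5])
grad = lambda x: x - c
x_star = accelerated_projected_gradient(
    grad, lambda z: project_box(z, 0.0, 1.0), x0=np.zeros(2), step=0.5)
print(x_star)  # converges to the projection of c onto the box: [1, 0]
```

Because the projection is applied after every gradient step, the iterate remains feasible throughout, which mirrors the feasibility property the abstract establishes for the continuous-time trajectories.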