Li Guocheng, Yan Zheng, Wang Jun
Department of Mathematics, Beijing Information Science and Technology University, Beijing, China.
Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong.
Neural Netw. 2014 Feb;50:79-89. doi: 10.1016/j.neunet.2013.11.007. Epub 2013 Nov 19.
Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network based on an exact penalty function method is proposed for solving constrained nonsmooth invex optimization problems. It is proved that, with a sufficiently large penalty parameter, any state of the proposed neural network is globally convergent to the optimal solution set of the constrained invex optimization problem. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and the constraint functions are pseudoconvex. Moreover, any neural state converges to the feasible region in finite time and stays there thereafter. Lower bounds on the penalty parameter and the convergence time are also estimated. Two numerical examples are provided to illustrate the performance of the proposed neural network.
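To make the exact-penalty idea concrete, the following is a minimal, hypothetical sketch of a penalty-based recurrent network of the kind the abstract describes: the state x(t) follows a (sub)gradient flow of an exact penalty function E(x) = f(x) + σ·Σ_i max(0, g_i(x)). The toy problem, the specific dynamics, and all function names below are illustrative assumptions, not the model or examples defined in the paper.

```python
# Hypothetical illustration of an exact-penalty (sub)gradient flow; this is
# NOT the authors' network, only a sketch of the general technique.
import numpy as np

def subgrad_penalty(x, sigma):
    """One selection from the subdifferential of E(x) = f(x) + sigma*max(0, g(x))
    for an assumed toy problem:
        minimize  f(x) = (x0 - 2)^2 + (x1 - 1)^2      (smooth, convex)
        subject to g(x) = x0 + x1 - 1 <= 0.
    """
    grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
    g = x[0] + x[1] - 1.0
    # Subgradient of max(0, g): zero when g < 0, the constraint gradient when
    # g > 0 (any convex combination is valid exactly at g = 0).
    grad_pen = np.array([1.0, 1.0]) if g > 0 else np.zeros(2)
    return grad_f + sigma * grad_pen

def simulate(x0, sigma=10.0, step=1e-3, iters=20000):
    """Forward-Euler discretization of the state equation dx/dt = -subgrad E(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x -= step * subgrad_penalty(x, sigma)
    return x

if __name__ == "__main__":
    # Starting from an infeasible point, the penalty term drives the state into
    # the feasible region, after which it slides toward the constrained
    # minimizer (1, 0); sigma must exceed the optimal multiplier (here 2) for
    # the penalty to be exact.
    print(simulate([3.0, 3.0]))
```

In this sketch the role of the "sufficiently large penalty parameter" is visible directly: with σ greater than the optimal Lagrange multiplier, minimizers of the penalized function coincide with the constrained minimizers, and the flow reaches the feasible region in finite time.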