Liang X B, Wang J
Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA.
IEEE Trans Neural Netw. 2000;11(6):1251-62. doi: 10.1109/72.883412.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function subject to the bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function subject to the bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and converges to the set of equilibria of the neural network from any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory eventually converges to the feasible region from any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost all positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
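The abstract does not state the network equations, so the following is only an illustrative sketch. A common projection-type formulation for recurrent networks solving min f(x) subject to l ≤ x ≤ u is dx/dt = P(x − α∇f(x)) − x, where P clips componentwise onto the box; the step size α and the test problem below are assumptions, not taken from the paper. The example uses a strictly convex quadratic whose unconstrained minimum lies outside the box, starts the trajectory at an infeasible point (mirroring the attractivity property), and integrates by forward Euler:

```python
import numpy as np

def grad_f(x, Q, c):
    """Gradient of the quadratic f(x) = 0.5 x'Qx + c'x."""
    return Q @ x + c

def simulate(Q, c, l, u, x0, alpha=0.5, h=0.01, steps=5000):
    """Forward-Euler integration of the assumed projection network
    dx/dt = P_[l,u](x - alpha * grad f(x)) - x."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += h * (np.clip(x - alpha * grad_f(x, Q, c), l, u) - x)
    return x

# Strictly convex quadratic: unconstrained minimum is (1, 5),
# outside the box [0, 1]^2; the constrained optimum is (1, 1).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -10.0])
l, u = np.zeros(2), np.ones(2)

# Infeasible initial state: the trajectory first enters the feasible
# region, then converges to the constrained optimum.
x_star = simulate(Q, c, l, u, x0=np.array([5.0, -3.0]))
print(x_star)  # approaches [1, 1]
```

At the limit point (1, 1) the gradient is (0, −8): both components sit at their upper bounds with nonpositive gradient components, so the KKT conditions for the box-constrained minimum hold, consistent with property 1).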