Liu Qingshan, Wang Jun
School of Automation, Southeast University, Nanjing 210096, China.
IEEE Trans Neural Netw. 2011 Apr;22(4):601-13. doi: 10.1109/TNN.2011.2104979. Epub 2011 Mar 10.
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network equals the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has two salient features: finite-time convergence and low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and the constrained least absolute deviation problem are discussed, with simulation results demonstrating the effectiveness and characteristics of the proposed neural network.
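To give a feel for the kind of dynamics the abstract describes, the following is a minimal sketch (not the paper's exact model) of a sign-based subgradient flow applied to an unconstrained least absolute deviation problem, minimize ||Ax - b||_1. The matrix `A`, the single gain parameter `sigma`, and the Euler step size `dt` are all illustrative assumptions; the paper's network additionally handles constraints and derives a lower bound on the gain that guarantees finite-time convergence.

```python
import numpy as np

# Illustrative sketch only: Euler discretization of the continuous-time
# dynamics  dx/dt = -sigma * A^T sign(A x - b), a subgradient flow for
# minimizing ||A x - b||_1. One state variable per decision variable,
# mirroring the one-neuron-per-variable structure described in the abstract.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))      # hypothetical data matrix
x_true = np.array([1.0, -2.0, 0.5])   # hypothetical minimizer
b = A @ x_true                        # noise-free, so x_true minimizes the L1 residual

x = np.zeros(3)   # initial state: one neuron per decision variable
sigma = 1.0       # single gain parameter (assumed value, not the paper's bound)
dt = 1e-3         # Euler integration step
for _ in range(20000):
    x -= dt * sigma * A.T @ np.sign(A @ x - b)

residual = np.abs(A @ x - b).sum()
print(residual)
```

Because the right-hand side is discontinuous, the discretized state chatters in a small neighborhood of the minimizer rather than settling exactly; the continuous-time flow with a sufficiently large gain is what yields the finite-time convergence claimed in the paper.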