A recurrent neural network for solving nonlinear convex programs subject to linear constraints.

Author Information

Xia Youshen, Wang Jun

Affiliation

Department of Applied Mathematics, Nanjing University of Posts and Telecommunications, Nanjing 210003, China.

Publication Information

IEEE Trans Neural Netw. 2005 Mar;16(2):379-86. doi: 10.1109/tnn.2004.841779.

Abstract

In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and a lower complexity for implementation than the existing neural networks for solving such problems. It is shown here that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within a finite time under the condition that the objective function is strictly convex. Compared with the existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.
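
The abstract does not reproduce the network's dynamical equations, so the sketch below is only a rough, hypothetical illustration of the general idea: a continuous-time flow whose equilibrium is the KKT point of a linearly constrained, strictly convex program, integrated here with a forward-Euler step. The function name `primal_dual_flow`, the equality-constrained formulation, and the toy quadratic problem are assumptions made for illustration; they are not the specific model proposed by Xia and Wang.

```python
import numpy as np

def primal_dual_flow(grad_f, A, b, x0, y0, step=1e-3, iters=200_000, tol=1e-8):
    """Euler-discretized primal-dual gradient flow for
        minimize f(x)  subject to  A x = b,
    with f strictly convex.  A generic continuous-time solver in the same
    spirit as recurrent-network approaches; NOT the specific dynamics of
    the Xia-Wang network, which are defined in the full paper.
        dx/dt = -(grad f(x) + A^T y),   dy/dt = A x - b
    """
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        dx = -(grad_f(x) + A.T @ y)   # primal descent direction
        dy = A @ x - b                # dual ascent direction (constraint residual)
        x += step * dx
        y += step * dy
        if np.linalg.norm(dx) + np.linalg.norm(dy) < tol:
            break
    return x, y

# Toy check: minimize 0.5*||x||^2 (strictly convex) subject to x1 + x2 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_star, _ = primal_dual_flow(lambda x: x, A, b, x0=np.zeros(2), y0=np.zeros(1))
print(x_star)  # approaches [0.5, 0.5], the analytic minimizer
```

Because the toy objective is strictly convex, this saddle-point flow has a unique equilibrium at the constrained minimizer, which is why the run drifts to [0.5, 0.5]; the paper's stronger claims (Lyapunov stability, global convergence in finite time without a Lipschitz condition) concern its own network dynamics, not this generic flow.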
