Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

Author information

Liu Qingshan, Wang Jun

Affiliation

School of Automation, Southeast University, Nanjing 210096, China.

Publication information

IEEE Trans Neural Netw. 2011 Apr;22(4):601-13. doi: 10.1109/TNN.2011.2104979. Epub 2011 Mar 10.

DOI: 10.1109/TNN.2011.2104979
PMID: 21402513
Abstract

This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
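The model class described in the abstract can be illustrated numerically. The sketch below is not the authors' exact network; it is a minimal, hypothetical example of single-layer recurrent dynamics with a hard-limiting (sign) activation applied to one of the paper's application cases, a least absolute deviation problem, integrated with a forward-Euler step. The data matrix `A`, vector `b`, gain, and step size are all assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical least absolute deviation problem: minimize ||A x - b||_1.
# b is constructed so the exact minimizer is x* = (1, 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])

gain = 1.0       # single gain parameter, as in the paper's model class
dt = 1e-3        # forward-Euler step for the continuous-time dynamics
x = np.zeros(2)  # one state neuron per decision variable

# Recurrent dynamics with a hard-limiting activation:
#   dx/dt = -gain * A^T sign(A x - b),
# a subgradient flow of the piecewise-linear objective ||A x - b||_1.
for _ in range(20_000):
    x += dt * (-gain * (A.T @ np.sign(A @ x - b)))

print(x)  # settles in a small neighborhood of the minimizer (1, 2)
```

In continuous time such sign-activated flows can reach the optimum in finite time; the discrete Euler iteration instead chatters around it with amplitude on the order of `gain * dt`, so a small step size keeps the final state close to the minimizer.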

Similar articles

1. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.
   IEEE Trans Neural Netw. 2011 Apr;22(4):601-13. doi: 10.1109/TNN.2011.2104979. Epub 2011 Mar 10.
2. A one-layer recurrent neural network for constrained nonsmooth optimization.
   IEEE Trans Syst Man Cybern B Cybern. 2011 Oct;41(5):1323-33. doi: 10.1109/TSMCB.2011.2140395. Epub 2011 May 2.
3. A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints.
   IEEE Trans Neural Netw Learn Syst. 2013 May;24(5):812-24. doi: 10.1109/TNNLS.2013.2244908.
4. A one-layer recurrent neural network for constrained nonsmooth invex optimization.
   Neural Netw. 2014 Feb;50:79-89. doi: 10.1016/j.neunet.2013.11.007. Epub 2013 Nov 19.
5. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.
   Neural Netw. 2012 Feb;26:99-109. doi: 10.1016/j.neunet.2011.09.001. Epub 2011 Sep 16.
6. A novel recurrent neural network with finite-time convergence for linear programming.
   Neural Comput. 2010 Nov;22(11):2962-78. doi: 10.1162/NECO_a_00029.
7. A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming.
   IEEE Trans Neural Netw. 2008 Apr;19(4):558-70. doi: 10.1109/TNN.2007.910736.
8. A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints.
   IEEE Trans Neural Netw. 2008 Aug;19(8):1340-53. doi: 10.1109/TNN.2008.2000273.
9. Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation.
   IEEE Trans Neural Netw Learn Syst. 2014 Mar;25(3):545-56. doi: 10.1109/TNNLS.2013.2278427.
10. Neural network for constrained nonsmooth optimization using Tikhonov regularization.
    Neural Netw. 2015 Mar;63:272-81. doi: 10.1016/j.neunet.2014.12.007. Epub 2014 Dec 31.

Cited by

1. IV-GNN: interval valued data handling using graph neural network.
   Appl Intell (Dordr). 2023;53(5):5697-5713. doi: 10.1007/s10489-022-03780-1. Epub 2022 Jul 1.
2. The general critical analysis for continuous-time UPPAM recurrent neural networks.
   Neurocomputing (Amst). 2016 Jan 29;175(Pt A):40-46. doi: 10.1016/j.neucom.2015.09.103.
3. Convergence and rate analysis of neural networks for sparse approximation.
   IEEE Trans Neural Netw Learn Syst. 2012 Sep;23(9):1377-89. doi: 10.1109/TNNLS.2012.2202400. Epub 2012 Jun 28.