Suppr 超能文献



Non-divergence of stochastic discrete time algorithms for PCA neural networks.

Publication Info

IEEE Trans Neural Netw Learn Syst. 2015 Feb;26(2):394-9. doi: 10.1109/TNNLS.2014.2312421.

DOI: 10.1109/TNNLS.2014.2312421
PMID: 25608296
Abstract

Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms cannot be divergent, but it is very difficult to directly study their convergence properties, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly, and derives some invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment by selecting proper learning parameters. Our theoretical results are verified by a series of simulation examples.
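The abstract concerns stochastic discrete-time (SDT) learning rules for PCA networks, of which Oja's rule is the classic example. The sketch below is not the paper's own code; the data distribution, step size, and iteration count are illustrative assumptions. It simulates the SDT recursion w_{k+1} = w_k + eta*(y_k*x_k - y_k^2*w_k) with one random sample per step, the setting in which the paper's invariant sets guarantee non-divergence for suitably small learning parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 5-D Gaussian data with a known covariance spectrum (an assumption,
# not the paper's setup): eigenvalues 9 > 4 > 2 > 1 > 0.5 in a random orthogonal basis.
d = 5
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
eigvals = np.array([9.0, 4.0, 2.0, 1.0, 0.5])
cov = Q @ np.diag(eigvals) @ Q.T
L = np.linalg.cholesky(cov)  # samples are x = L @ z, z ~ N(0, I)

def oja_sdt(steps=20000, eta=0.005):
    """Oja's stochastic discrete-time (SDT) PCA rule:
    w_{k+1} = w_k + eta * (y_k * x_k - y_k**2 * w_k), with y_k = w_k . x_k.
    A sufficiently small constant step size keeps the weight vector inside a
    bounded (invariant) region, so the iteration does not diverge."""
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        x = L @ rng.normal(size=d)  # one fresh random sample per update
        y = w @ x
        w = w + eta * (y * x - y * y * w)
    return w

w = oja_sdt()
v = Q[:, 0]  # true principal eigenvector (eigenvalue 9)
alignment = abs(w @ v) / np.linalg.norm(w)
```

In a run like this the weight norm stays near 1 and the alignment with the principal eigenvector approaches 1; with a step size chosen too large, the same recursion can escape any bounded region and diverge, which is exactly the regime the paper's learning-parameter conditions rule out.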


Similar Articles

1. Non-divergence of stochastic discrete time algorithms for PCA neural networks.
   IEEE Trans Neural Netw Learn Syst. 2015 Feb;26(2):394-9. doi: 10.1109/TNNLS.2014.2312421.
2. Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm.
   IEEE Trans Neural Netw. 2005 Nov;16(6):1318-28. doi: 10.1109/TNN.2005.852236.
3. Convergence analysis of a simple minor component analysis algorithm.
   Neural Netw. 2007 Sep;20(7):842-50. doi: 10.1016/j.neunet.2007.07.001. Epub 2007 Jul 21.
4. Asymptotic stability for neural networks with mixed time-delays: the discrete-time case.
   Neural Netw. 2009 Jan;22(1):67-74. doi: 10.1016/j.neunet.2008.10.001. Epub 2008 Oct 18.
5. Stability analysis of time-delay neural networks subject to stochastic perturbations.
   IEEE Trans Cybern. 2013 Dec;43(6):2122-34. doi: 10.1109/TCYB.2013.2240451.
6. Convergence analysis of deterministic discrete time system of a unified self-stabilizing algorithm for PCA and MCA.
   Neural Netw. 2012 Dec;36:64-72. doi: 10.1016/j.neunet.2012.08.016. Epub 2012 Sep 17.
7. Propagation and control of stochastic signals through universal learning networks.
   Neural Netw. 2006 May;19(4):487-99. doi: 10.1016/j.neunet.2005.10.005. Epub 2006 Jan 18.
8. Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions.
   IEEE Trans Neural Netw. 2011 Jan;22(1):164-70. doi: 10.1109/TNN.2010.2093151. Epub 2010 Dec 3.
9. Stability of Cohen-Grossberg neural networks with time-varying delays.
   Neural Netw. 2007 Oct;20(8):868-73. doi: 10.1016/j.neunet.2007.07.005. Epub 2007 Jul 28.
10. Stochastic variance models in discrete time with feedforward neural networks.
    Neural Comput. 2009 Jul;21(7):1990-2008. doi: 10.1162/neco.2009.11-07-642.