

On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):211-22. doi: 10.1109/TNNLS.2011.2178477.

DOI: 10.1109/TNNLS.2011.2178477
PMID: 24808501
Abstract

Improving the fault tolerance of neural networks has been studied for more than two decades, and various training algorithms have been proposed to that end. The on-line node fault injection-based algorithm is one of them: hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and a convergence proof. We consider three cases for multilayer perceptrons (MLPs): (1) MLPs with a single linear output node; (2) MLPs with multiple linear output nodes; and (3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases (1) and (2) have the same form: both consist of a mean squared error term, a regularizer term, and a weight decay term. For case (3), the objective function is slightly different from that of cases (1) and (2). With the derived objective functions, we can compare the similarities and differences among the various algorithms and cases.
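
The abstract states that, for cases (1) and (2), the derived objective consists of a mean squared error term, a regularizer term, and a weight decay term. Purely as a schematic of that stated three-term structure (the exact expressions are derived in the paper and not reproduced here), one can write:

```latex
% Schematic of the stated three-term objective; illustrative only.
% f(x; w) is the MLP output, N the number of samples, R(w) an
% unspecified regularizer, and \lambda, \mu coefficients that would
% depend on the node fault rate. The precise terms are in the paper.
\[
V(w) =
\underbrace{\frac{1}{N}\sum_{n=1}^{N}\bigl(y_n - f(x_n; w)\bigr)^2}_{\text{mean squared error}}
+ \underbrace{\lambda\, R(w)}_{\text{regularizer}}
+ \underbrace{\mu\, \lVert w \rVert^2}_{\text{weight decay}}
\]
```

The training procedure itself (hidden nodes randomly forced to output zero during on-line learning) is straightforward to sketch. Below is a minimal NumPy illustration for case (1), an MLP with a single linear output node; the layer sizes, learning rate, fault rate p, and toy regression target are all assumptions made for this example, not values from the paper.

```python
# Minimal sketch, assuming case (1): an MLP with one tanh hidden layer
# and a single linear output node, trained on-line (one sample per
# update) with node fault injection. Layer sizes, learning rate, fault
# rate, and the toy target below are illustrative assumptions, not
# values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 16
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input-to-hidden weights
b1 = np.zeros(n_hidden)                            # hidden biases
w2 = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output weights
lr, p = 0.01, 0.2                                  # step size, node fault rate

for step in range(10_000):
    # On-line learning: draw one training sample per weight update.
    x = rng.normal(size=n_in)
    y = np.sin(x).sum()  # toy regression target (assumed)

    # Node fault injection: each hidden node is forced to output zero
    # with probability p, independently at every update.
    mask = (rng.random(n_hidden) >= p).astype(float)

    a = W1 @ x + b1
    h = np.tanh(a) * mask  # faulty nodes contribute nothing
    y_hat = w2 @ h         # single linear output node
    err = y_hat - y

    # Gradient of the per-sample squared error under the injected faults.
    grad_pre = err * w2 * mask * (1.0 - np.tanh(a) ** 2)
    w2 -= lr * err * h
    W1 -= lr * np.outer(grad_pre, x)
    b1 -= lr * grad_pre
```

This per-sample, randomly masked update is the kind of on-line procedure the paper analyzes; the paper's contribution is proving that such training converges with probability one and deriving the objective function it minimizes.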


Similar Articles

1. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):211-22. doi: 10.1109/TNNLS.2011.2178477.
2. Objective functions of online weight noise injection training algorithms for MLPs.
IEEE Trans Neural Netw. 2011 Feb;22(2):317-23. doi: 10.1109/TNN.2010.2095881. Epub 2010 Dec 23.
3. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.
4. Convergence analyses on on-line weight noise injection-based training algorithms for MLPs.
IEEE Trans Neural Netw Learn Syst. 2012 Nov;23(11):1827-40. doi: 10.1109/TNNLS.2012.2210243.
5. A fault-tolerant regularizer for RBF networks.
IEEE Trans Neural Netw. 2008 Mar;19(3):493-507. doi: 10.1109/TNN.2007.912320.
6. Global convergence of online BP training with dynamic learning rate.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):330-41. doi: 10.1109/TNNLS.2011.2178315.
7. Novel maximum-margin training algorithms for supervised neural networks.
IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. doi: 10.1109/TNN.2010.2046423. Epub 2010 Apr 19.
8. Avoiding overfitting in multilayer perceptrons with feeling-of-knowing using self-organizing maps.
Biosystems. 2005 Apr;80(1):37-40. doi: 10.1016/j.biosystems.2004.09.031. Epub 2004 Nov 2.
9. Weight Noise Injection-Based MLPs With Group Lasso Penalty: Asymptotic Convergence and Application to Node Pruning.
IEEE Trans Cybern. 2019 Dec;49(12):4346-4364. doi: 10.1109/TCYB.2018.2864142. Epub 2018 Dec 5.
10. On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise.
IEEE Trans Neural Netw. 2009 Jan;20(1):124-38. doi: 10.1109/TNN.2008.2005596. Epub 2008 Dec 22.

Cited By

1. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
Cogn Neurodyn. 2015 Jun;9(3):331-40. doi: 10.1007/s11571-014-9323-z. Epub 2015 Jan 1.