

Objective functions of online weight noise injection training algorithms for MLPs.

Authors

Ho Kevin, Leung Chi-Sing, Sum John

Affiliation

Department of Computer Science and Communication Engineering, Providence University, Taichung 43301, Taiwan.

Publication

IEEE Trans Neural Netw. 2011 Feb;22(2):317-23. doi: 10.1109/TNN.2010.2095881. Epub 2010 Dec 23.

DOI: 10.1109/TNN.2010.2095881
PMID: 21189237
Abstract

Injecting weight noise during training has been a simple strategy to improve the fault tolerance of multilayer perceptrons (MLPs) for almost two decades, and several online training algorithms have been proposed in this regard. However, there are some misconceptions about the objective functions being minimized by these algorithms. Some existing results misinterpret that the prediction error of a trained MLP affected by weight noise is equivalent to the objective function of a weight noise injection algorithm. In this brief, we would like to clarify these misconceptions. Two weight noise injection scenarios will be considered: one is based on additive weight noise injection and the other is based on multiplicative weight noise injection. To avoid the misconceptions, we use their mean updating equations to analyze the objective functions. For injecting additive weight noise during training, we show that the true objective function is identical to the prediction error of a faulty MLP whose weights are affected by additive weight noise. It consists of the conventional mean square error and a smoothing regularizer. For injecting multiplicative weight noise during training, we show that the objective function is different from the prediction error of a faulty MLP whose weights are affected by multiplicative weight noise. With our results, some existing misconceptions regarding MLP training with weight noise injection can now be resolved.
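The two injection scenarios the brief contrasts can be sketched in code. The following is a minimal illustration, not the authors' exact algorithm: at each online step, noise is sampled and applied to the weights (additively, w + b, or multiplicatively, w·(1 + b)), the gradient is evaluated at the perturbed weights, and the update is applied to the underlying noise-free weights. All function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2):
    """One-hidden-layer MLP: tanh hidden units, linear output."""
    h = np.tanh(W1 @ x)
    return W2 @ h, h

def noise_injection_step(x, y, W1, W2, lr=0.05, sigma=0.01, mode="additive"):
    """One online training step with weight noise injected into the
    forward/backward pass; the update is applied to the noise-free weights."""
    B1 = sigma * rng.standard_normal(W1.shape)
    B2 = sigma * rng.standard_normal(W2.shape)
    if mode == "additive":
        V1, V2 = W1 + B1, W2 + B2              # perturbed weights w + b
    else:
        V1, V2 = W1 * (1 + B1), W2 * (1 + B2)  # perturbed weights w * (1 + b)
    y_hat, h = mlp_forward(x, V1, V2)
    e = y_hat - y                               # output error at noisy weights
    g_W2 = np.outer(e, h)                       # gradient of squared error w.r.t. W2
    g_W1 = np.outer((V2.T @ e) * (1 - h**2), x) # backprop through tanh layer
    return W1 - lr * g_W1, W2 - lr * g_W2
```

The paper's point is that only in the additive case does this procedure minimize the prediction error of the correspondingly perturbed network (mean square error plus a smoothing regularizer); in the multiplicative case the minimized objective is a different function.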


Similar Articles

1. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks. IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.
2. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis. IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):211-22. doi: 10.1109/TNNLS.2011.2178477.
3. On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise. IEEE Trans Neural Netw. 2009 Jan;20(1):124-38. doi: 10.1109/TNN.2008.2005596. Epub 2008 Dec 22.
4. On the selection of weight decay parameter for faulty networks. IEEE Trans Neural Netw. 2010 Aug;21(8):1232-44. doi: 10.1109/TNN.2010.2049580.
5. Avoiding overfitting in multilayer perceptrons with feeling-of-knowing using self-organizing maps. Biosystems. 2005 Apr;80(1):37-40. doi: 10.1016/j.biosystems.2004.09.031. Epub 2004 Nov 2.
6. Convergence analyses on on-line weight noise injection-based training algorithms for MLPs. IEEE Trans Neural Netw Learn Syst. 2012 Nov;23(11):1827-40. doi: 10.1109/TNNLS.2012.2210243.
7. Feature selection in MLPs and SVMs based on maximum output information. IEEE Trans Neural Netw. 2004 Jul;15(4):937-48. doi: 10.1109/TNN.2004.828772.
8. Learning associative memories by error backpropagation. IEEE Trans Neural Netw. 2011 Mar;22(3):347-55. doi: 10.1109/TNN.2010.2099239. Epub 2010 Dec 23.
9. Modular network SOM. Neural Netw. 2009 Jan;22(1):82-90. doi: 10.1016/j.neunet.2008.10.006. Epub 2008 Nov 6.