

Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

Authors

Ho Kevin I-J, Leung Chi-Sing, Sum John

Affiliations

Department of Computer Science and Communication Engineering, Providence University, Sha-Lu 433, Taiwan.

Publication

IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.

DOI:10.1109/TNN.2010.2046179
PMID:20388593
Abstract

In the last two decades, many online fault/noise injection algorithms have been developed to attain a fault tolerant neural network. However, few theoretical results on their convergence and objective functions have been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that these six online algorithms converge almost surely. Moreover, the true objective functions they minimize are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. As with injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.

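The result for algorithm 1 above (injecting additive input noise is equivalent to Tikhonov regularization) can be sketched with a toy online trainer for an RBF network. The data, centers, width, learning rate, and noise level below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus observation noise.
X = rng.uniform(-3, 3, size=200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)

# Fixed RBF centers and width; only the output weights are trained online.
centers = np.linspace(-3, 3, 15)
width = 0.5

def phi(x):
    """RBF feature vector for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def train_online(noise_std, epochs=50, lr=0.05):
    """Online LMS training of the output weights. Gaussian noise with
    standard deviation `noise_std` is added to each input before the
    update (additive input noise injection)."""
    w = np.zeros_like(centers)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_noisy = xi + noise_std * rng.standard_normal()
            h = phi(x_noisy)
            err = yi - h @ w
            w += lr * err * h
    return w

w_clean = train_online(noise_std=0.0)
w_noisy = train_online(noise_std=0.3)

# The noise-injected solution behaves like a Tikhonov-regularized fit:
# it typically yields a smoother mapping with a smaller weight norm.
print(np.linalg.norm(w_clean), np.linalg.norm(w_noisy))
```

Comparing the two learned mappings on a dense grid shows the noise-injected model tracking a smoothed version of the target, which is the regularization effect the paper formalizes.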

Similar articles

1
Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.
2
Objective functions of online weight noise injection training algorithms for MLPs.
IEEE Trans Neural Netw. 2011 Feb;22(2):317-23. doi: 10.1109/TNN.2010.2095881. Epub 2010 Dec 23.
3
A fault-tolerant regularizer for RBF networks.
IEEE Trans Neural Netw. 2008 Mar;19(3):493-507. doi: 10.1109/TNN.2007.912320.
4
On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise.
IEEE Trans Neural Netw. 2009 Jan;20(1):124-38. doi: 10.1109/TNN.2008.2005596. Epub 2008 Dec 22.
5
On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):211-22. doi: 10.1109/TNNLS.2011.2178477.
6
Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2619-2632. doi: 10.1109/TNNLS.2021.3107051. Epub 2023 May 2.
7
A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
IEEE Trans Neural Netw Learn Syst. 2017 Jun;28(6):1360-1372. doi: 10.1109/TNNLS.2016.2536172. Epub 2016 Mar 28.
8
On the selection of weight decay parameter for faulty networks.
IEEE Trans Neural Netw. 2010 Aug;21(8):1232-44. doi: 10.1109/TNN.2010.2049580.
9
Blind equalization using a predictive radial basis function neural network.
IEEE Trans Neural Netw. 2005 May;16(3):709-20. doi: 10.1109/TNN.2005.845145.
10
Generalized M-sparse algorithms for constructing fault tolerant RBF networks.
Neural Netw. 2024 Dec;180:106633. doi: 10.1016/j.neunet.2024.106633. Epub 2024 Aug 14.

Cited by

1
The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm.
Protein Cell. 2016 Oct;7(10):735-748. doi: 10.1007/s13238-016-0302-5. Epub 2016 Aug 9.
2
Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
Cogn Neurodyn. 2015 Jun;9(3):331-40. doi: 10.1007/s11571-014-9323-z. Epub 2015 Jan 1.