

Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

Authors

Ho Kevin I-J, Leung Chi-Sing, Sum John

Affiliations

Department of Computer Science and Communication Engineering, Providence University, Sha-Lu 433, Taiwan.

Publication information

IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.

Abstract

In the last two decades, many online fault/noise injection algorithms have been developed to attain fault-tolerant neural networks. However, little theoretical work on their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that these six online algorithms converge almost surely. Moreover, the true objective functions that they minimize are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error; thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
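To make the first algorithm concrete, the sketch below shows online training of a fixed-center RBF network's output weights with additive input-noise injection. All names, data, and hyperparameters here are illustrative assumptions, not taken from the paper; it is only a minimal instance of the technique, which (per the abstract) effectively minimizes a Tikhonov-regularized objective rather than the plain mean square error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 10 fixed RBF centers on [-1, 1]; only the output
# weights are trained online.
centers = np.linspace(-1.0, 1.0, 10)
width = 0.3

def phi(x):
    """Hidden-layer RBF outputs for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Synthetic training data: noisy samples of a smooth target function.
X = rng.uniform(-1.0, 1.0, 500)
Y = np.sin(np.pi * X) + 0.05 * rng.standard_normal(500)

# Online learning with additive input-noise injection: at each step the
# input is perturbed before computing the LMS-style gradient update.
w = np.zeros_like(centers)
lr, noise_std = 0.05, 0.1
for epoch in range(5):
    for x, y in zip(X, Y):
        x_noisy = x + noise_std * rng.standard_normal()  # inject input noise
        h = phi(x_noisy)
        err = y - w @ h
        w += lr * err * h

# Mean square error on the clean inputs after training.
mse = np.mean([(y - w @ phi(x)) ** 2 for x, y in zip(X, Y)])
print(f"training MSE after noise-injected online learning: {mse:.4f}")
```

Injecting noise only changes which objective the same LMS-style update minimizes; the per-step cost is identical to plain online training.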

