Hou Muzhou, Han Xuli
Central South University, Changsha, China.
IEEE Trans Neural Netw. 2010 Sep;21(9):1517-23. doi: 10.1109/TNN.2010.2055888. Epub 2010 Aug 5.
It is well known that single-hidden-layer feedforward networks with radial basis function (RBF) kernels are universal approximators when all the parameters of the network are obtained through various algorithms. However, as observed in most neural network implementations, tuning all the parameters of the network can make learning complicated and cause poor generalization, overtraining, and instability. Unlike conventional neural network theories, this brief gives a constructive proof that a decay RBF neural network with n+1 hidden neurons can interpolate n+1 multivariate samples with zero error. We then prove that the given decay RBFs can uniformly approximate any continuous multivariate function with arbitrary precision without training. Two numerical experiments show faster convergence and better generalization performance than the conventional RBF algorithm, the BP algorithm, extreme learning machine, and support vector machines.
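The zero-error interpolation claim parallels classical exact RBF interpolation, where placing one kernel center at each sample and solving a linear system reproduces all targets exactly. The sketch below illustrates that classical construction only; it is not the paper's decay-RBF scheme, and the Gaussian kernel and the width parameter `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_interpolate(X, y, gamma=1.0):
    """Exact Gaussian-RBF interpolant with one center per sample.

    Solves Phi w = y, where Phi[i, j] = exp(-gamma * ||x_i - x_j||^2).
    For distinct samples the Gaussian kernel matrix is nonsingular, so
    the resulting interpolant matches every training target exactly.
    """
    # Pairwise squared distances between samples (which are also the centers)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-gamma * d2)
    w = np.linalg.solve(Phi, y)
    # Return the interpolant as a function of new query points Z
    return lambda Z: np.exp(
        -gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    ) @ w

# n + 1 = 5 multivariate samples, interpolated with zero error
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
y = rng.standard_normal(5)
f = rbf_interpolate(X, y)
print(np.allclose(f(X), y))  # True: zero interpolation error
```

Note that this still requires solving an (n+1)-by-(n+1) linear system; the brief's contribution is a construction whose weights are obtained directly, avoiding iterative training altogether.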