Rouhani Modjtaba, Javan Dawood S
Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
Neural Netw. 2016 Mar;75:150-61. doi: 10.1016/j.neunet.2015.12.011. Epub 2016 Jan 4.
This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers, and the number of hidden neurons of the network so that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers part of the training data; the process terminates when all training data are covered or the maximum number of neurons is reached. At each step, the center and spread of the new neuron are chosen to maximize its coverage. Maximizing neuron coverage yields a network with fewer neurons and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, it is proved that, under the proposed learning approaches, all data become linearly separable in the space of hidden-layer outputs, which implies that linear output-layer weights with zero training error exist. The proposed methods are applied to several well-known datasets; the simulation results, compared with SVM and other leading RBF learning methods, show satisfactory and comparable performance.
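The abstract gives no pseudocode, but the greedy construction it describes can be sketched. The snippet below is a minimal illustration in Python, under assumptions the abstract does not pin down: the coverage rule (a neuron covers the still-uncovered same-class points lying closer to its center than any point of another class, with the spread set to that boundary distance), the exponent p of the power exponential activation, integer class labels 0..K-1, and the one-hot least-squares fit of the output weights are all illustrative choices, not the authors' exact method. The names `power_exp` and `build_rbf` are hypothetical.

```python
import numpy as np

def power_exp(r, sigma, p=4.0):
    # Power exponential (generalized Gaussian) activation; p = 2 gives a Gaussian.
    return np.exp(-(r / sigma) ** p)

def build_rbf(X, y, max_neurons=50, p=4.0):
    # Greedy, coverage-maximizing construction of RBF centers and spreads
    # (a sketch of one plausible reading of the abstract, not the paper's algorithm).
    n = len(X)
    uncovered = np.ones(n, dtype=bool)
    centers, spreads = [], []
    while uncovered.any() and len(centers) < max_neurons:
        best = None  # (coverage count, center, spread, covered mask)
        for i in np.where(uncovered)[0]:
            d = np.linalg.norm(X - X[i], axis=1)
            other = y != y[i]
            # Largest radius that excludes every other-class point
            # (assumes no identical points carry different labels).
            radius = d[other].min() if other.any() else d.max() + 1.0
            covered = uncovered & (y == y[i]) & (d < radius)
            if best is None or covered.sum() > best[0]:
                best = (covered.sum(), X[i], radius, covered)
        _, c, s, covered = best
        centers.append(c)
        spreads.append(s)
        uncovered &= ~covered  # added neuron covers these points
    # Hidden-layer outputs for all training points.
    H = np.stack([power_exp(np.linalg.norm(X - c, axis=1), s, p)
                  for c, s in zip(centers, spreads)], axis=1)
    # Linear output weights by least squares on one-hot targets.
    T = np.eye(int(y.max()) + 1)[y]
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return centers, spreads, W
```

With labels encoded as 0..K-1, `build_rbf(X, y)` returns the greedy centers, spreads, and output weights; a new point is classified by computing its hidden-layer outputs, multiplying by W, and taking the argmax over columns.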