Lu Y, Sundararajan N, Saratchandran P
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
Neural Comput. 1997 Feb 15;9(2):461-78. doi: 10.1162/neco.1997.9.2.461.
This article presents a sequential learning algorithm for function approximation and time-series prediction using a minimal radial basis function neural network (RBFNN). The algorithm combines the growth criterion of the resource-allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network converges toward a minimal topology for the RBFNN. The performance of the algorithm is compared with RAN and the enhanced RAN algorithm of Kadirkamanathan and Niranjan (1993) on the following benchmark problems: (1) hearta from the benchmark problems database PROBEN1, (2) the Hermite polynomial, and (3) the Mackey-Glass chaotic time series. On these problems, the proposed algorithm is shown to produce RBFNNs with far fewer hidden neurons while achieving the same or better accuracy.
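The growth-plus-pruning loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the thresholds (`e_min`, `eps_min`, `delta`, `window`), the width heuristic for new units, and the simple LMS weight update are all assumptions made for the sketch.

```python
import numpy as np

class MinimalRBFSketch:
    """Illustrative sequential RBF learner: RAN-style growth plus
    pruning of units whose relative output contribution stays small.
    All hyperparameters here are assumed, not taken from the paper."""

    def __init__(self, e_min=0.05, eps_min=0.3, delta=0.01, window=20, lr=0.05):
        self.centers, self.widths, self.weights = [], [], []
        self.e_min = e_min      # error threshold for growth (assumed)
        self.eps_min = eps_min  # distance-to-nearest-center threshold (assumed)
        self.delta = delta      # pruning threshold on relative contribution
        self.window = window    # consecutive low-contribution steps before pruning
        self.lr = lr            # learning rate for the LMS weight update
        self.low_count = []     # per-unit low-contribution counters

    def _phi(self, x):
        # Gaussian activations of all hidden units at input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.weights, self._phi(x)))

    def observe(self, x, y):
        x = np.asarray(x, dtype=float)
        e = y - self.predict(x)
        d = (min(np.linalg.norm(x - c) for c in self.centers)
             if self.centers else np.inf)
        if abs(e) > self.e_min and d > self.eps_min:
            # Growth: the sample is novel (large error, far from all
            # centers), so allocate a new unit at the input (RAN criterion)
            self.centers.append(x.copy())
            self.widths.append(max(0.5 * d, 0.1) if np.isfinite(d) else 1.0)
            self.weights.append(e)
            self.low_count.append(0)
        elif self.centers:
            # Otherwise adapt the output weights with a simple LMS step
            phi = self._phi(x)
            self.weights = list(np.asarray(self.weights) + self.lr * e * phi)
        self._prune(x)

    def _prune(self, x):
        # Pruning: remove units whose contribution |w_i * phi_i(x)|,
        # normalized by the largest contribution, stays below delta
        # for `window` consecutive observations
        if len(self.centers) < 2:
            return
        contrib = np.abs(np.asarray(self.weights) * self._phi(x))
        rel = contrib / (contrib.max() + 1e-12)
        keep = []
        for i, r in enumerate(rel):
            self.low_count[i] = self.low_count[i] + 1 if r < self.delta else 0
            if self.low_count[i] < self.window:
                keep.append(i)
        self.centers = [self.centers[i] for i in keep]
        self.widths = [self.widths[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]
        self.low_count = [self.low_count[i] for i in keep]
```

Fed samples one at a time, the network grows only where the novelty criteria fire and sheds units that stop contributing, which is what drives the topology toward a minimal size.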