Chen S, Cowan CN, Grant PM
Dept. of Electrical Engineering, University of Edinburgh
IEEE Trans Neural Netw. 1991;2(2):302-9. doi: 10.1109/72.80341.
The radial basis function network offers a viable alternative to the two-layer neural network in many signal processing applications. A common learning algorithm for radial basis function networks first chooses some data points at random as radial basis function centers and then uses singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks; in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. At each step, the algorithm selects the center that maximizes the increment to the explained variance (energy) of the desired output, and it does not suffer from numerical ill-conditioning. The orthogonal least-squares learning strategy thus provides a simple and efficient means of fitting radial basis function networks, as illustrated with examples from two different signal processing applications.
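The selection strategy the abstract describes can be sketched as forward selection with Gram–Schmidt orthogonalization: candidate regressors (one Gaussian basis function per data point) are orthogonalized against the basis chosen so far, and the candidate with the largest error-reduction ratio is added. This is a minimal illustrative sketch, not the authors' implementation; the function names, the fixed Gaussian width, and the stopping rule (a fixed number of centers rather than an error tolerance) are assumptions made here for brevity.

```python
import numpy as np

def gaussian_rbf(X, centers, width):
    # Gaussian activations from pairwise squared distances
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def ols_select_centers(X, y, width=1.0, n_centers=5):
    """Forward-select RBF centers from the training points.

    Each step orthogonalizes every remaining candidate regressor
    against the basis chosen so far (classical Gram-Schmidt) and
    picks the one whose error-reduction ratio -- the fraction of the
    desired output's energy it explains -- is largest.  A tolerance-
    based stopping criterion could replace the fixed n_centers.
    """
    P = gaussian_rbf(X, X, width)   # one candidate regressor per data point
    n = X.shape[0]
    selected, basis = [], []
    yy = y @ y
    for _ in range(n_centers):
        best_err, best_i, best_w = -1.0, None, None
        for i in range(n):
            if i in selected:
                continue
            w = P[:, i].copy()
            for q in basis:                      # orthogonalize against chosen basis
                w -= (q @ w) / (q @ q) * q
            if w @ w < 1e-12:                    # numerically dependent; skip
                continue
            err = (w @ y) ** 2 / ((w @ w) * yy)  # error-reduction ratio
            if err > best_err:
                best_err, best_i, best_w = err, i, w
        selected.append(best_i)
        basis.append(best_w)
    # solve for the output weights of the selected-center network
    weights, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
    return X[selected], weights
```

Because each new regressor is orthogonal to those already chosen, the contribution of a candidate can be scored independently of the current basis, which is what makes the one-by-one selection both cheap and well conditioned.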