IEEE Trans Neural Netw Learn Syst. 2012 Apr;23(4):609-19. doi: 10.1109/TNNLS.2012.2185059.
This paper proposes an improved second-order (ISO) algorithm for training radial basis function (RBF) networks. Besides the traditional parameters, namely centers, widths, and output weights, the input weights on the connections between the input layer and the hidden layer are also adjusted during training. Increasing the number of adjustable dimensions in this way yields more accurate results. Initial centers are chosen from the training patterns, and the other parameters are generated randomly within a limited range. Taking advantage of the fast convergence and powerful search ability of second-order algorithms, the proposed ISO algorithm normally reaches smaller training/testing errors with far fewer RBF units. During computation, the quasi-Hessian matrix and gradient vector are accumulated as sums of related sub-matrices and sub-vectors, respectively. Only one Jacobian row is stored and used for multiplication at a time, instead of storing and multiplying the entire Jacobian matrix. This memory reduction speeds up computation and allows training on problems with an essentially unlimited number of patterns. Several practical discrete and continuous classification problems are used to test the properties of the proposed ISO training algorithm.
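The row-by-row accumulation described above can be sketched as follows. This is a minimal illustration of the general idea (accumulating the quasi-Hessian Q = JᵀJ and gradient g = Jᵀe from one Jacobian row at a time), not the paper's implementation; the function names and toy setup are assumptions for illustration:

```python
import numpy as np

def accumulate_second_order(jac_row_fn, err_fn, n_patterns, n_params):
    """Accumulate quasi-Hessian Q = J^T J and gradient g = J^T e
    pattern by pattern, so only one Jacobian row is held in memory."""
    Q = np.zeros((n_params, n_params))
    g = np.zeros(n_params)
    for p in range(n_patterns):
        j = jac_row_fn(p)      # one 1 x n_params Jacobian row for pattern p
        e = err_fn(p)          # scalar output error for pattern p
        Q += np.outer(j, j)    # add the sub-matrix j^T j
        g += j * e             # add the sub-vector j^T e
    return Q, g

# A Levenberg-Marquardt-style parameter update would then use Q and g,
# e.g. delta = -np.linalg.solve(Q + mu * np.eye(n_params), g)
# for some damping factor mu (hypothetical usage, not from the paper).
```

Because Q and g are built incrementally, memory grows with the number of parameters, not the number of training patterns, which is what allows training sets of essentially unlimited size.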