Zhang Y, Li X R
Department of Electrical and Computer Engineering, The University of Western Ontario, London, Ontario N6A 5B9, Canada.
IEEE Trans Neural Netw. 1999;10(4):930-8. doi: 10.1109/72.774266.
A fast learning algorithm for training multilayer feedforward neural networks (FNNs) using a fading-memory extended Kalman filter (FMEKF) is presented first, along with a technique that uses a self-adjusting, time-varying forgetting factor. A U-D factorization-based FMEKF is then proposed to further improve the learning speed and accuracy of the FNN. Compared with backpropagation (BP) and existing EKF-based learning algorithms, the proposed U-D factorization-based FMEKF provides much more accurate learning results while using fewer hidden nodes, and it offers an improved convergence rate and better numerical stability (robustness). In addition, it is less sensitive to start-up parameters (e.g., initial weights and covariance matrix) and to randomness in the observed data. It also generalizes well and needs less training time to reach a specified learning accuracy. Simulation results on modeling and identification of nonlinear dynamic systems demonstrate the effectiveness and efficiency of the proposed algorithm.
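The core idea can be illustrated with a minimal sketch: treat the network weights as the state of an EKF, use the Jacobian of the network output with respect to the weights as the measurement matrix, and divide the covariance by a forgetting factor λ before each update so older data fade. This is a hedged toy version in plain covariance form, not the paper's U-D factorized implementation; the network size, target function, noise variance, and the constant λ (the paper self-adjusts it) are all illustrative assumptions.

```python
import numpy as np

# Toy FMEKF training of a 1-input, 1-output FNN with one tanh hidden layer.
# All hyperparameters below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

n_in, n_hid = 1, 8
# Weight vector packs: W (n_hid x n_in), b (n_hid), v (n_hid), c (scalar)
n_w = n_hid * n_in + n_hid + n_hid + 1
w = 0.1 * rng.standard_normal(n_w)

def unpack(w):
    i = 0
    W = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b = w[i:i + n_hid]; i += n_hid
    v = w[i:i + n_hid]; i += n_hid
    return W, b, v, w[i]

def forward(w, x):
    W, b, v, c = unpack(w)
    h = np.tanh(W @ x + b)
    return v @ h + c

def jacobian(w, x):
    # d(yhat)/d(w) for the scalar-output network above (analytic)
    W, b, v, c = unpack(w)
    h = np.tanh(W @ x + b)
    dh = 1.0 - h**2                       # tanh derivative
    dW = np.outer(v * dh, x)              # sensitivity w.r.t. W
    return np.concatenate([dW.ravel(), v * dh, h, np.array([1.0])])

P = 100.0 * np.eye(n_w)   # initial weight covariance (large = uncertain)
R = 0.01                  # assumed measurement-noise variance
lam = 0.995               # forgetting factor; constant here, adaptive in the paper

f = lambda x: np.sin(np.pi * x)   # toy nonlinear target to identify

for epoch in range(50):
    for _ in range(40):
        x = rng.uniform(-1.0, 1.0, size=n_in)
        y = f(x[0])
        H = jacobian(w, x)                # 1 x n_w measurement Jacobian
        Pm = P / lam                      # fading-memory covariance inflation
        S = H @ Pm @ H + R                # scalar innovation variance
        K = (Pm @ H) / S                  # Kalman gain
        w = w + K * (y - forward(w, x))   # weight (state) update
        P = Pm - np.outer(K, H @ Pm)
        P = 0.5 * (P + P.T)               # keep P symmetric numerically

xs = np.linspace(-1.0, 1.0, 50)
err = max(abs(forward(w, np.array([x])) - f(x)) for x in xs)
print(f"max abs error: {err:.3f}")
```

The explicit symmetrization of P hints at the numerical fragility that motivates the paper's U-D factorization: propagating P as the product U D Uᵀ (unit upper-triangular U, diagonal D) keeps the covariance positive semidefinite without such patches.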