Wang G J, Chen C C
Dept. of Mech. Eng., Nat. Chung-Hsing Univ., Taichung.
IEEE Trans Neural Netw. 1996;7(3):768-75. doi: 10.1109/72.501734.
A faster learning algorithm for adjusting the weights of a multilayer feedforward neural network is proposed. In this algorithm, the weight matrix W(2) of the output layer and the output vector Y of the previous layer are treated as two sets of variables. An optimal solution pair (W(2)*, Y(P)*) is found that minimizes the sum-square error over the input patterns. Y(P)* is then used as the desired output of the previous layer. The optimal weight matrices and layer output vectors of the hidden layers are found by the same method as for the output layer. In addition, a dynamic forgetting-factor method makes the proposed algorithm even more effective in dynamic system identification. Computer simulations show that the new algorithm outperforms other learning algorithms in both convergence speed and required computation time.
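The core idea of the abstract (treating the output weights and the previous layer's outputs as two variable sets and solving each by linear least squares) can be sketched as follows. This is a minimal illustration of that layer-by-layer least-squares step, not the authors' exact algorithm; the matrix shapes, the minimum-norm solver, and the toy data are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_output_weights(Y, D):
    """Step 1 (sketch): with Y fixed, find W2 minimizing ||W2 @ Y - D||_F^2.
    Columns of Y are hidden-layer outputs per pattern; columns of D are
    desired network outputs per pattern."""
    # Solve Y.T @ W2.T = D.T in the least-squares sense.
    W2T, *_ = np.linalg.lstsq(Y.T, D.T, rcond=None)
    return W2T.T

def desired_previous_output(W2, D):
    """Step 2 (sketch): with W2 fixed, find Y* minimizing ||W2 @ Y - D||_F^2
    (minimum-norm solution). Y* then serves as the desired output of the
    previous layer, as described in the abstract."""
    Ystar, *_ = np.linalg.lstsq(W2, D, rcond=None)
    return Ystar

# Toy problem (hypothetical sizes): 3 hidden units, 2 outputs, 20 patterns.
Y = rng.standard_normal((3, 20))   # hidden-layer outputs, one column per pattern
D = rng.standard_normal((2, 20))   # desired outputs, one column per pattern

W2 = optimal_output_weights(Y, D)        # optimal output-layer weights for this Y
Ystar = desired_previous_output(W2, D)   # desired output for the previous layer

# Replacing Y with Y* cannot increase the sum-square error for this W2.
e_before = np.sum((W2 @ Y - D) ** 2)
e_after = np.sum((W2 @ Ystar - D) ** 2)
print(e_after <= e_before + 1e-12)
```

In the paper's scheme this pair of least-squares solves would be applied layer by layer, with Y* propagated backward as each hidden layer's training target; the dynamic forgetting factors mentioned in the abstract would additionally down-weight old patterns in a recursive formulation for time-varying systems.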