Oh SH, Lee SY
IEEE Trans Neural Netw. 1999;10(4):960-4. doi: 10.1109/72.774272.
This letter proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as the product of two optimum factors, one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced in conventional LBL algorithms.
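The abstract only sketches the idea, so below is a minimal, hypothetical illustration of layer-by-layer (LBL) style training for a one-hidden-layer MLP. The paper's actual hidden error function is not given here; a generic squared hidden error against hidden targets assigned from the back-propagated output error stands in for it, and the toy XOR data, learning rates, and all names are assumptions for illustration only.

```python
# Hypothetical LBL-style training sketch (not the paper's exact method):
# the output layer is updated first, then hidden targets are assigned from
# the output error and the hidden weights are updated to reduce a squared
# hidden error against those targets.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 2-bit XOR, with a bias column appended to the inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
Xb = np.hstack([X, np.ones((4, 1))])

n_hidden = 4
W_h = rng.normal(scale=0.5, size=(3, n_hidden))        # input -> hidden
W_o = rng.normal(scale=0.5, size=(n_hidden + 1, 1))    # hidden(+bias) -> output

eta_out, eta_hid = 0.5, 0.5   # separate learning rates for the two layers

for epoch in range(5000):
    # Forward pass
    H = sigmoid(Xb @ W_h)                      # hidden activations
    Hb = np.hstack([H, np.ones((4, 1))])       # add bias unit
    Y = sigmoid(Hb @ W_o)                      # network output

    # Output layer: delta rule on the squared output error.
    delta_o = (Y - T) * Y * (1 - Y)
    W_o -= eta_out * (Hb.T @ delta_o)

    # Hidden layer: assign hidden targets by propagating the output delta
    # back through the (updated) output weights, then minimize the squared
    # hidden error ||H - H_target||^2 as a stand-in for the paper's new
    # hidden error function.
    H_target = H - delta_o @ W_o[:-1].T        # drop the bias row of W_o
    delta_h = (H - H_target) * H * (1 - H)
    W_h -= eta_hid * (Xb.T @ delta_h)

print(np.round(Y.ravel(), 3))   # outputs typically approach [0, 1, 1, 0]
```

With this generic hidden-target assignment the hidden update reduces to ordinary backpropagation; the paper's contribution lies in the specific hidden error function and the resulting factorization of the optimum hidden learning rate, which this sketch does not reproduce.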