
A new error function at hidden layers for fast training of multilayer perceptrons

Author Information

Oh S H, Lee S Y

Publication Information

IEEE Trans Neural Netw. 1999;10(4):960-4. doi: 10.1109/72.774272.

Abstract

This letter proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector factors approximately into a product of two optimum terms: one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs, without the stalling problem experienced in conventional LBL algorithms.
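The abstract describes a two-stage layer-by-layer step: hidden targets are first assigned from the output error, and the hidden weights are then updated to reduce an error between the actual and the target hidden activations, with the optimum hidden learning rate factoring approximately into one term per stage. The sketch below illustrates only that generic two-stage structure, not the paper's specific hidden error function: a plain squared hidden error is used as a stand-in, and every name and step size (lbl_step, eta_target, eta_hidden, eta_out) is an illustrative assumption.

```python
# Hedged sketch of a generic layer-by-layer (LBL) training step for a
# one-hidden-layer MLP. This is NOT the hidden error function proposed by
# Oh & Lee (1999); it only shows the two-stage idea the abstract describes:
# (1) assign hidden targets from the output error, (2) minimize a hidden
# error between actual and target hidden activations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lbl_step(x, t, W1, W2, eta_target=1.0, eta_hidden=0.5, eta_out=0.5):
    """One layer-by-layer update on a single pattern.

    x : (n_in,) input, t : (n_out,) target,
    W1 : (n_hid, n_in) hidden weights, W2 : (n_out, n_hid) output weights.
    """
    # Forward pass.
    h = sigmoid(W1 @ x)   # hidden activations
    y = sigmoid(W2 @ h)   # outputs

    # Output-layer delta for squared error with sigmoid output units.
    delta_out = (t - y) * y * (1.0 - y)

    # Stage 1 (hidden-target assignment): move the hidden activations a
    # step in the direction that reduces the output error; eta_target plays
    # the role of the abstract's "assigning hidden targets" factor.
    h_target = np.clip(h + eta_target * (W2.T @ delta_out), 1e-3, 1 - 1e-3)

    # Stage 2 (hidden-error minimization): one gradient step on a plain
    # squared error between h and h_target, standing in for the paper's new
    # hidden error function; eta_hidden plays the "minimization" factor.
    delta_hid = (h_target - h) * h * (1.0 - h)
    W1 += eta_hidden * np.outer(delta_hid, x)

    # Output weights are updated against the original hidden activations.
    W2 += eta_out * np.outer(delta_out, h)
    return W1, W2, float(np.sum((t - y) ** 2))

# Toy usage: learn XOR with 4 hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(scale=0.5, size=(4, 2))
W2 = rng.normal(scale=0.5, size=(1, 4))
for epoch in range(2000):
    err = 0.0
    for x, t in zip(X, T):
        W1, W2, e = lbl_step(x, t, W1, W2)
        err += e
print("final squared error:", err)
```

Clipping the assigned hidden targets to the open interval (0, 1) is one simple way to keep them reachable by sigmoid units and to avoid saturated targets; whether this matches the paper's own target-assignment rule is not claimed here.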

