

An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.

Author information

Ergezinger S, Thomsen E

Affiliations

Inst. für Allgemeine Nachrichtentechnik, Hannover Univ.

Publication information

IEEE Trans Neural Netw. 1995;6(1):31-42. doi: 10.1109/72.363452.

Abstract

Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation as a steepest descent approach is too slow for many applications. In this paper a new learning procedure is presented which is based on a linearization of the nonlinear processing elements and the optimization of the multilayer perceptron layer by layer. In order to limit the introduced linearization error, a penalty term is added to the cost function. The new learning algorithm is applied to the problem of nonlinear prediction of chaotic time series. The proposed algorithm yields results, in both accuracy and convergence rate, that are orders of magnitude superior to conventional backpropagation learning.
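The abstract only outlines the approach. As a rough illustration of the core idea (linearize a layer's nonlinearity around its current pre-activations, then solve a penalized linear least-squares problem for that layer's weights), a minimal sketch follows. The tanh activation, the ridge-style penalty on the weight update, and the function name `linearized_layer_update` are assumptions made for illustration; they are not the paper's exact formulation.

```python
import numpy as np

def linearized_layer_update(X, T, W, lam=0.1):
    """Update one layer's weights by linearizing tanh around the current
    pre-activations and solving a penalized least-squares problem.

    X   : (N, d_in)  inputs to the layer
    T   : (N, d_out) desired layer outputs
    W   : (d_in, d_out) current weights
    lam : penalty strength on the weight update; a larger value keeps the
          update small so the linearization stays approximately valid
    """
    A = X @ W                 # current pre-activations
    Y = np.tanh(A)            # current layer outputs
    D = 1.0 - Y ** 2          # tanh'(A): slope used in the linearization
    R = T - Y                 # residual to be explained by the update dW

    d_in, d_out = W.shape
    dW = np.zeros_like(W)
    # Under the linearization, each output unit decouples: solve a small
    # ridge-regularized least-squares problem per column of W.
    for j in range(d_out):
        Xj = X * D[:, [j]]                       # effective design matrix
        lhs = Xj.T @ Xj + lam * np.eye(d_in)     # normal equations + penalty
        rhs = Xj.T @ R[:, j]
        dW[:, j] = np.linalg.solve(lhs, rhs)
    return W + dW
```

In a layer-by-layer scheme of this kind, such an update would be applied to one layer at a time while the others are held fixed, with the penalty term keeping each weight change inside the region where the linearization remains a good approximation.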

