Regression modeling in back-propagation and projection pursuit learning.

Author Information

Hwang JN, Lay SR, Maechler M, Martin RD, Schimert J

Affiliations

Department of Electrical Engineering, University of Washington, Seattle, WA.

Publication Information

IEEE Trans Neural Netw. 1994;5(3):342-53. doi: 10.1109/72.286906.

Abstract

We study and compare two types of connectionist learning methods for model-free regression problems: 1) back-propagation learning (BPL) and 2) projection pursuit learning (PPL), which has emerged in recent years in the statistical estimation literature. Both BPL and PPL are based on projections of the data in directions determined from the interconnection weights. However, unlike BPL, which uses fixed nonlinear activations (usually sigmoidal) for the hidden neurons, PPL systematically approximates the unknown nonlinear activations. Moreover, BPL estimates all the weights simultaneously at each iteration, while PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although BPL and PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, PPL proves more parsimonious in that it requires fewer hidden neurons to approximate the true function. To further improve the statistical performance of PPL, an orthogonal polynomial approximation is used in place of the supersmoother method originally proposed for approximating the nonlinear activations in PPL.
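
To make the contrast concrete, here is a minimal NumPy sketch of the PPL idea described above: a sum of ridge functions f(x) ≈ Σ_k g_k(a_k·x) fitted cyclically, neuron by neuron, with each unknown activation g_k estimated by a low-degree polynomial least-squares fit. This is an illustration, not the authors' algorithm: numpy.polyfit stands in for their orthogonal-polynomial activation approximation, a crude random search over projection directions replaces their Gauss-Newton update, and the function fit_ppl and all parameter names are hypothetical. The backfitting structure, refitting one unit against the partial residual left by the others, is what the abstract means by cyclic, neuron-by-neuron estimation.

```python
# Minimal sketch of projection pursuit learning (PPL) for regression.
# Model: f(x) = sum_k g_k(a_k . x), fitted by cyclic backfitting.
# Assumptions (not from the paper): polynomial least squares replaces
# the orthogonal-polynomial fit; random search replaces Gauss-Newton.
import numpy as np

rng = np.random.default_rng(0)

def fit_ppl(X, y, n_units=3, degree=5, n_sweeps=10, n_trials=50):
    _, d = X.shape
    directions = rng.normal(size=(n_units, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    coefs = [np.zeros(degree + 1) for _ in range(n_units)]  # each g_k starts at 0

    def unit_output(k):
        return np.polyval(coefs[k], X @ directions[k])

    for _ in range(n_sweeps):                   # cyclic sweeps over the units
        for k in range(n_units):                # neuron-by-neuron estimation
            # Partial residual: what the other units have not yet explained.
            r = y - sum(unit_output(j) for j in range(n_units) if j != k)
            best_a, best_c, best_err = directions[k], coefs[k], np.inf
            candidates = [directions[k]] + [rng.normal(size=d) for _ in range(n_trials)]
            for a in candidates:
                a = a / np.linalg.norm(a)
                z = X @ a                       # projection of the data
                c = np.polyfit(z, r, degree)    # polynomial estimate of g_k
                err = np.mean((r - np.polyval(c, z)) ** 2)
                if err < best_err:
                    best_a, best_c, best_err = a, c, err
            directions[k], coefs[k] = best_a, best_c
    return directions, coefs

# Toy check on a sum of two ridge functions.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(2.0 * (X[:, 0] + X[:, 1])) + (X[:, 0] - X[:, 1]) ** 2
dirs, cs = fit_ppl(X, y)
pred = sum(np.polyval(c, X @ a) for a, c in zip(dirs, cs))
print("training MSE:", np.mean((y - pred) ** 2))
```

A BPL network of the same size would instead fix each g_k to a sigmoid and update all directions and output weights simultaneously by gradient (or Gauss-Newton) steps, which is the trade-off the abstract compares.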

