

An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer

Authors

Ergezinger S, Thomsen E

Affiliation

Inst. für Allgemeine Nachrichtentech., Hannover Univ.

Publication

IEEE Trans Neural Netw. 1995;6(1):31-42. doi: 10.1109/72.363452.

DOI: 10.1109/72.363452
PMID: 18263283
Abstract

Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations hereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation as a steepest descent approach is too slow for many applications. In this paper a new learning procedure is presented which is based on a linearization of the nonlinear processing elements and the optimization of the multilayer perceptron layer by layer. In order to limit the introduced linearization error a penalty term is added to the cost function. The new learning algorithm is applied to the problem of nonlinear prediction of chaotic time series. The proposed algorithm yields results in both accuracy and convergence rates which are orders of magnitude superior compared to conventional backpropagation learning.

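The abstract only sketches the method at a high level. As a rough illustration of the layer-by-layer idea (not the paper's exact OLL update rules), the snippet below trains a one-hidden-layer MLP on a toy regression task: the output layer, which is exactly linear in its weights given the hidden activations, is solved in closed form by least squares, while the hidden layer is updated through a linearization of tanh with a ridge-style penalty standing in for the paper's linearization-error penalty term. All names, sizes, and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with a 1-hidden-layer MLP.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

H = 10                                   # hidden units (illustrative)
W1 = rng.normal(0.0, 0.5, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1))
b2 = np.zeros(1)
lam = 10.0                               # penalty weight: keeps the hidden-layer
                                         # step small so the linearization holds

def forward(X, W1, b1, W2, b2):
    Z = X @ W1 + b1                      # pre-activations (N, H)
    A = np.tanh(Z)                       # hidden activations (N, H)
    return Z, A, A @ W2 + b2             # network output (N, 1)

def fit_output_layer(A, y):
    # Output is exactly linear in (W2, b2) given A: plain least squares.
    A1 = np.hstack([A, np.ones((len(A), 1))])
    sol, *_ = np.linalg.lstsq(A1, y, rcond=None)
    return sol[:-1], sol[-1]

for _ in range(20):
    _, A, _ = forward(X, W1, b1, W2, b2)
    W2, b2 = fit_output_layer(A, y)

    # Hidden layer: linearize tanh around the current pre-activations,
    #   out(W1 + dW1) ≈ out(W1) + (tanh'(Z) * (Xb @ dTheta)) @ W2,
    # so the residual becomes linear in the step dTheta = (dW1; db1).
    # The lam * I term plays the role of the penalty on linearization error.
    _, A, out = forward(X, W1, b1, W2, b2)
    r = y - out                          # residual to explain (N, 1)
    D = 1.0 - A ** 2                     # tanh'(Z), (N, H)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # inputs plus bias column
    Phi = np.einsum('ni,nj,j->nij', Xb, D, W2[:, 0]).reshape(len(X), -1)
    step = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ r)
    step = step.reshape(Xb.shape[1], H)
    W1 += step[:-1]
    b1 += step[-1]

# Final closed-form output fit, then measure the training error.
_, A, _ = forward(X, W1, b1, W2, b2)
W2, b2 = fit_output_layer(A, y)
mse = float(np.mean((y - (A @ W2 + b2)) ** 2))
```

Because each output-layer pass is a global least-squares solve rather than a gradient step, the training error drops much faster per iteration than plain steepest descent, which is the qualitative behavior the abstract reports for the full OLL algorithm.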

Similar Articles

1. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.
IEEE Trans Neural Netw. 1995;6(1):31-42. doi: 10.1109/72.363452.
2. Fast training of multilayer perceptrons.
IEEE Trans Neural Netw. 1997;8(6):1314-20. doi: 10.1109/72.641454.
3. An accelerated learning algorithm for multilayer perceptron networks.
IEEE Trans Neural Netw. 1994;5(3):493-7. doi: 10.1109/72.286921.
4. On the initialization and optimization of multilayer perceptrons.
IEEE Trans Neural Netw. 1994;5(5):738-51. doi: 10.1109/72.317726.
5. A new error function at hidden layers for fast training of multilayer perceptrons.
IEEE Trans Neural Netw. 1999;10(4):960-4. doi: 10.1109/72.774272.
6. The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network.
IEEE Trans Neural Netw. 2000;11(2):295-305. doi: 10.1109/72.839001.
7. Decomposition Techniques for Multilayer Perceptron Training.
IEEE Trans Neural Netw Learn Syst. 2016 Nov;27(11):2146-2159. doi: 10.1109/TNNLS.2015.2475621. Epub 2015 Sep 22.
8. A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm.
IEEE Trans Neural Netw. 2001;12(2):424-30. doi: 10.1109/72.914537.
9. An equalized error backpropagation algorithm for the on-line training of multilayer perceptrons.
IEEE Trans Neural Netw. 2002;13(3):532-41. doi: 10.1109/TNN.2002.1000122.
10. Comments on "An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer".
IEEE Trans Neural Netw. 1998;9(2):339-41. doi: 10.1109/72.661128.

Cited By

1. Risk Analysis of A-H Share Connect Market Based on Deep Learning and BP Neural Network.
Comput Intell Neurosci. 2022 Jul 21;2022:1921463. doi: 10.1155/2022/1921463. eCollection 2022.