

The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network.

Author

Rubanov N S

Affiliation

Radiophysics Department, Belarussian State University, Minsk, Belarus.

Publication

IEEE Trans Neural Netw. 2000;11(2):295-305. doi: 10.1109/72.839001.

DOI: 10.1109/72.839001
PMID: 18249761
Abstract

Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome; in particular, two major drawbacks are convergence to local minima and long learning times. In this paper we propose an efficient learning method for an FNN that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN, and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces learning time, and improves generalization compared with other well-known methods.
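The per-layer weight update the abstract refers to builds on the classical Kaczmarz row-projection iteration for linear systems. As background, here is a minimal NumPy sketch of the plain Kaczmarz method on a small consistent system; the function name and the toy system are illustrative, not taken from the paper, and the paper's actual procedure wraps this idea inside a BP-based, layer-wise scheme.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100):
    """Classical Kaczmarz iteration for a consistent system A x = b.

    Each step projects the current estimate onto the hyperplane
    defined by one row a_i of A: x <- x + (b_i - a_i.x) / ||a_i||^2 * a_i.
    Sweeping cyclically over the rows converges for consistent systems.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Toy consistent system: 2x + y = 3, x + 3y = 4, exact solution (1, 1).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)  # x ≈ [1., 1.]
```

Because each step touches only one row, the iteration needs no matrix factorization, which is what makes it attractive as a cheap inner solver for per-layer weight updates.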


Similar Articles

1. The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network.
IEEE Trans Neural Netw. 2000;11(2):295-305. doi: 10.1109/72.839001.
2. A fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification.
IEEE Trans Neural Netw. 1999;10(4):930-8. doi: 10.1109/72.774266.
3. Backpropagation algorithm adaptation parameters using learning automata.
Int J Neural Syst. 2001 Jun;11(3):219-28. doi: 10.1142/S0129065701000655.
4. On adaptive learning rate that guarantees convergence in feedforward networks.
IEEE Trans Neural Netw. 2006 Sep;17(5):1116-25. doi: 10.1109/TNN.2006.878121.
5. Stability analysis of a three-term backpropagation algorithm.
Neural Netw. 2005 Dec;18(10):1341-7. doi: 10.1016/j.neunet.2005.04.007. Epub 2005 Aug 30.
6. A general backpropagation algorithm for feedforward neural networks learning.
IEEE Trans Neural Netw. 2002;13(1):251-4. doi: 10.1109/72.977323.
7. New nonleast-squares neural network learning algorithms for hypothesis testing.
IEEE Trans Neural Netw. 1995;6(3):596-609. doi: 10.1109/72.377966.
8. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.
IEEE Trans Neural Netw. 1995;6(1):31-42. doi: 10.1109/72.363452.
9. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
Int J Neural Syst. 2002 Feb;12(1):45-67. doi: 10.1142/S012906570200090X.
10. Efficient learning algorithms for three-layer regular feedforward fuzzy neural networks.
IEEE Trans Neural Netw. 2004 May;15(3):545-58. doi: 10.1109/TNN.2004.824250.