The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

Affiliations

ISL, Department of Electrical Engineering, Stanford University, CA, USA.

Publication info

Neural Netw. 2013 Jan;37:182-8. doi: 10.1016/j.neunet.2012.09.020. Epub 2012 Oct 15.

DOI: 10.1016/j.neunet.2012.09.020
PMID: 23140797
Abstract

A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress.
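The procedure described in the abstract (hidden weights set once at random and frozen; only the output layer trained with the Widrow-Hoff LMS rule) can be sketched as follows. This is a hypothetical NumPy toy example, not the authors' code: the task, network size, step size, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear task (illustrative, not from the paper):
# classify points by the sign of x0 * x1.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1]).reshape(-1, 1)

# Hidden layer: weights drawn at random once and never trained ("No-Prop").
n_hidden = 50
W_h = rng.normal(size=(2, n_hidden))
b_h = rng.normal(size=n_hidden)
H = np.tanh(X @ W_h + b_h)            # fixed nonlinear features, shape (200, 50)

# Output layer: trained with the LMS rule, i.e. stochastic steepest
# descent on the instantaneous squared error.
w_out = np.zeros((n_hidden, 1))
mu = 0.01                              # LMS step size (assumed value)
for epoch in range(100):
    for i in range(len(X)):
        h = H[i:i + 1]                 # current pattern's hidden outputs, (1, 50)
        err = y[i:i + 1] - h @ w_out   # instantaneous error
        w_out += mu * h.T @ err        # LMS update: w <- w + mu * x * e

pred = np.sign(H @ w_out)
acc = (pred == y).mean()
```

Note that the abstract's LMS Capacity appears here as the number of output weights (`n_hidden` = 50): with 200 training patterns the network is over capacity, so zero training error is not expected, but increasing `n_hidden` raises the capacity without any change to the training rule.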

Similar articles

1. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.
   Neural Netw. 2013 Jan;37:182-8. doi: 10.1016/j.neunet.2012.09.020. Epub 2012 Oct 15.
2. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
   Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
3. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
   Int J Neural Syst. 2008 Oct;18(5):433-41. doi: 10.1142/S0129065708001695.
4. A new constructive algorithm for architectural and functional adaptation of artificial neural networks.
   IEEE Trans Syst Man Cybern B Cybern. 2009 Dec;39(6):1590-605. doi: 10.1109/TSMCB.2009.2021849. Epub 2009 Jun 5.
5. Least Square Fast Learning Network for modeling the combustion efficiency of a 300WM coal-fired boiler.
   Neural Netw. 2014 Mar;51:57-66. doi: 10.1016/j.neunet.2013.12.006. Epub 2013 Dec 16.
6. Single-hidden-layer feed-forward quantum neural network based on Grover learning.
   Neural Netw. 2013 Sep;45:144-50. doi: 10.1016/j.neunet.2013.02.012. Epub 2013 Mar 14.
7. Training pi-sigma network by online gradient algorithm with penalty for small weight update.
   Neural Comput. 2007 Dec;19(12):3356-68. doi: 10.1162/neco.2007.19.12.3356.
8. Novel maximum-margin training algorithms for supervised neural networks.
   IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. doi: 10.1109/TNN.2010.2046423. Epub 2010 Apr 19.
9. A fast multilayer neural-network training algorithm based on the layer-by-layer optimizing procedures.
   IEEE Trans Neural Netw. 1996;7(3):768-75. doi: 10.1109/72.501734.
10. Efficient self-organizing multilayer neural network for nonlinear system modeling.
   Neural Netw. 2013 Jul;43:22-32. doi: 10.1016/j.neunet.2013.01.015. Epub 2013 Feb 12.

Cited by

1. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
   Materials (Basel). 2017 May 18;10(5):543. doi: 10.3390/ma10050543.
2. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.
   PLoS One. 2015 Aug 11;10(8):e0134254. doi: 10.1371/journal.pone.0134254. eCollection 2015.
3. Sparse extreme learning machine for classification.
   IEEE Trans Cybern. 2014 Oct;44(10):1858-70. doi: 10.1109/TCYB.2014.2298235.