
The interchangeability of learning rate and gain in backpropagation neural networks.

Author Information

Thimm G, Moerland P, Fiesler E

Affiliation

IDIAP, CH-1920 Martigny, Switzerland.

Publication Information

Neural Comput. 1996 Feb 15;8(2):451-60. doi: 10.1162/neco.1996.8.2.451.

Abstract

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids for optical neural networks.
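To make the stated equivalence concrete, below is a minimal numerical sketch (not code from the paper) for a single logistic unit trained by plain gradient descent on a squared error. The gain value gamma, the learning rate eta, and the helper functions sigmoid and train are illustrative assumptions. In this single-unit case, training with gain gamma, learning rate eta, and initial weights w0 tracks, step for step, a unit with gain 1, learning rate gamma**2 * eta, and initial weights gamma * w0.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic sigmoid with an explicit gain (steepness) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def train(x, t, w0, lr, gain, steps=50):
    """Gradient descent on E = 0.5*(y - t)**2 for a single sigmoid unit;
    returns the weights after `steps` updates."""
    w = w0.copy()
    for _ in range(steps):
        y = sigmoid(w @ x, gain)
        grad = (y - t) * y * (1.0 - y) * gain * x   # dE/dw
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=3)       # one input pattern (illustrative)
t = 0.7                      # its target value (illustrative)
w0 = rng.normal(size=3)
gamma, eta = 2.5, 0.1        # assumed gain and learning rate

# Unit A: gain gamma, weights w0, learning rate eta.
wA = train(x, t, w0, lr=eta, gain=gamma)

# Unit B: gain 1, weights gamma*w0, learning rate gamma**2 * eta.
wB = train(x, t, gamma * w0, lr=gamma**2 * eta, gain=1.0)

# The two weight trajectories stay related by the factor gamma,
# so the comparison below should print True (up to rounding error).
print(np.allclose(gamma * wA, wB))
```

The final weight vectors of the two runs remain related by the factor gamma, which is what the last line checks; the paper's theorem generalizes this kind of rescaling argument to multilayer networks and to common backpropagation variants.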

Similar Articles

3. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
   Int J Neural Syst. 2002 Feb;12(1):45-67. doi: 10.1142/S012906570200090X.
4. H∞-learning of layered neural networks.
   IEEE Trans Neural Netw. 2001;12(6):1265-77. doi: 10.1109/72.963763.
6. Backpropagation and ordered derivatives in the time scales calculus.
   IEEE Trans Neural Netw. 2010 Aug;21(8):1262-9. doi: 10.1109/TNN.2010.2050332. Epub 2010 Jul 8.
7. Multiple disorder diagnosis with adaptive competitive neural networks.
   Artif Intell Med. 1993 Dec;5(6):469-87. doi: 10.1016/0933-3657(93)90038-5.
8. Equivalence of backpropagation and contrastive Hebbian learning in a layered network.
   Neural Comput. 2003 Feb;15(2):441-54. doi: 10.1162/089976603762552988.
9. Neural networks learning with sliding mode control: the sliding mode backpropagation algorithm.
   Int J Neural Syst. 1999 Jun;9(3):187-93. doi: 10.1142/s0129065799000174.
10. TAO-robust backpropagation learning algorithm.
    Neural Netw. 2005 Mar;18(2):191-204. doi: 10.1016/j.neunet.2004.11.007.
