
Learning with incomplete information in the committee machine.

Authors

Bergmann Urs M, Kühn Reimer, Stamatescu Ion-Olimpiu

Affiliations

Institut für Theoretische Physik, Universität Heidelberg, Heidelberg, Germany.

Publication information

Biol Cybern. 2009 Dec;101(5-6):401-10. doi: 10.1007/s00422-009-0345-2. Epub 2009 Nov 4.

Abstract

We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for the overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics, showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c and if the initial overlap between student and teacher weights is non-zero. In the case of convergence, the generalization error exhibits a power-law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on the initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved, in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.
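The learning scheme described above can be illustrated with a minimal toy sketch. This is not the authors' implementation: the function names (`committee`, `hebbian_step`), the batch size, and the specific unlearning rule (scaling the undo step by the batch error rate) are illustrative assumptions; only the overall structure — unsupervised Hebbian updates on a series of associations, followed by partial, indiscriminate unlearning of that series to an extent set by the student's success rate — follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 100, 3   # input dimension, number of hidden units
lam = 0.5       # Hebbian learning strength lambda (toy value)

# Teacher and student are committee machines: output = sign(sum_k sign(w_k . x)).
W_teacher = rng.normal(size=(K, N))
W_student = rng.normal(size=(K, N))
# Give the student a small non-zero initial overlap with the teacher
# (a condition for convergence to perfect generalization in the paper).
W_student += 0.1 * W_teacher

def committee(W, x):
    """Committee-machine output: majority vote of the hidden perceptrons."""
    return np.sign(np.sign(W @ x).sum())

def hebbian_step(W, x, strength):
    """Unsupervised Hebbian update: each hidden unit reinforces its own decision."""
    h = np.sign(W @ x)
    return W + (strength / np.sqrt(N)) * np.outer(h, x)

for epoch in range(200):
    # Learn a series of associations with unsupervised Hebbian steps.
    batch = rng.choice([-1.0, 1.0], size=(20, N))
    labels = np.array([committee(W_teacher, x) for x in batch])
    for x in batch:
        W_student = hebbian_step(W_student, x, lam)

    # Delayed reinforcement: partly and indiscriminately unlearn the whole
    # series, scaled by the student's error rate on it (assumed rule).
    err = np.mean([committee(W_student, x) != y for x, y in zip(batch, labels)])
    for x in batch:
        h = np.sign(W_student @ x)
        W_student -= err * (lam / np.sqrt(N)) * np.outer(h, x)

# Normalized overlaps between corresponding student and teacher weight vectors,
# the order parameters whose dynamics the paper analyzes.
R = np.diag(W_student @ W_teacher.T) / (
    np.linalg.norm(W_student, axis=1) * np.linalg.norm(W_teacher, axis=1))
```

In the paper's coarse-grained analysis these overlaps obey a closed set of differential equations; the toy loop above only mimics the microscopic dynamics they are derived from.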


Similar articles

1. Learning with incomplete information in the committee machine.
   Biol Cybern. 2009 Dec;101(5-6):401-10. doi: 10.1007/s00422-009-0345-2. Epub 2009 Nov 4.
2. Learning with incomplete information and the mathematical structure behind it.
   Biol Cybern. 2007 Jul;97(1):99-112. doi: 10.1007/s00422-007-0162-4. Epub 2007 May 30.
3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
   Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
4. Subspace information criterion for model selection.
   Neural Comput. 2001 Aug;13(8):1863-89. doi: 10.1162/08997660152469387.
5. Nonlinear complex-valued extensions of Hebbian learning: an essay.
   Neural Comput. 2005 Apr;17(4):779-838. doi: 10.1162/0899766053429381.
6. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.
   Neural Netw. 2013 Jan;37:182-8. doi: 10.1016/j.neunet.2012.09.020. Epub 2012 Oct 15.
7. Improving generalization capabilities of dynamic neural networks.
   Neural Comput. 2004 Jun;16(6):1253-82. doi: 10.1162/089976604773717603.
8. Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets.
   Phys Rev E Stat Nonlin Soft Matter Phys. 2001 Jul;64(1 Pt 1):011919. doi: 10.1103/PhysRevE.64.011919. Epub 2001 Jun 27.
9. Dimensional reduction for reward-based learning.
   Network. 2006 Sep;17(3):235-52. doi: 10.1080/09548980600773215.
10. A fast and convergent stochastic MLP learning algorithm.
    Int J Neural Syst. 2001 Dec;11(6):573-83. doi: 10.1142/S0129065701000977.
