Prototype Analysis in Hopfield Networks With Hebbian Learning.

Author Information

McAlister Hayden, Robins Anthony, Szymanski Lech

Affiliation

School of Computing, University of Otago, Dunedin 9016, New Zealand

Publication Information

Neural Comput. 2024 Oct 11;36(11):2322-2364. doi: 10.1162/neco_a_01704.

Abstract

We discuss prototype formation in the Hopfield network. Typically, Hebbian learning with highly correlated states leads to degraded memory performance. We show that this type of learning can lead to prototype formation, where unlearned states emerge as representatives of large correlated subsets of states, alleviating capacity woes. This process has similarities to prototype learning in human cognition. We provide a substantial literature review of prototype learning in associative memories, covering contributions from psychology, statistical physics, and computer science. We analyze prototype formation from a theoretical perspective and derive a stability condition for these states based on the number of examples of the prototype presented for learning, the noise in those examples, and the number of nonexample states presented. The stability condition is used to construct a probability of stability for a prototype state as the factors of stability change. We also note similarities to traditional network analysis, allowing us to find a prototype capacity. We corroborate these expectations of prototype formation with experiments using a simple Hopfield network with standard Hebbian learning. We extend our experiments to a Hopfield network trained on data with multiple prototypes and find the network is capable of stabilizing multiple prototypes concurrently. We measure the basins of attraction of the multiple prototype states, finding attractor strength grows with the number of examples and the agreement of examples. We link the stability and dominance of prototype states to the energy profile of these states, particularly when comparing the profile shape to target states or other spurious states.
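The central mechanism the abstract describes, an unlearned prototype becoming a stable state when only noisy examples of it are stored via standard Hebbian learning, can be illustrated with a small numerical sketch. This is not the authors' code; the network size, example count, and noise level below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of neurons (assumed, for illustration only)
M = 30           # number of noisy examples of one prototype
flip_prob = 0.2  # per-bit noise applied to each example

# A single prototype state; note it is never stored directly.
prototype = rng.choice([-1, 1], size=N)

# Noisy examples of the prototype: each bit flipped with probability flip_prob.
flips = rng.random((M, N)) < flip_prob
examples = prototype * np.where(flips, -1, 1)

# Standard Hebbian learning over the examples only.
W = examples.T @ examples / N
np.fill_diagonal(W, 0.0)

def is_stable(state, W):
    """A state is stable if one synchronous update leaves every unit unchanged."""
    return np.array_equal(np.sign(W @ state), state)

print("prototype stable:", is_stable(prototype, W))
print("first example stable:", is_stable(examples[0], W))
```

With these settings the unlearned prototype typically comes out as a fixed point of the update dynamics while the individual noisy examples do not; varying M and flip_prob gives a rough feel for how stability depends on the number of examples and their noise, the factors the paper's stability condition is built from.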
