Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks.

Authors

Hillar Christopher, Chan Tenzin, Taubman Rachel, Rolnick David

Affiliations

Awecom, Inc., San Francisco, CA 94103, USA.

Singapore University of Technology and Design, Singapore 487372, Singapore.

Publication

Entropy (Basel). 2021 Nov 11;23(11):1494. doi: 10.3390/e23111494.

Abstract

In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graph theory. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1-ϵ})} memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
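The symmetric-weight, fixed-point dynamics mentioned above can be illustrated with a minimal sketch. The snippet below uses classical Hebbian (outer-product) storage as a baseline rule, not the paper's MEF objective; the function and pattern names are illustrative assumptions.

```python
import numpy as np

def hopfield_converge(W, theta, x, max_iters=100):
    """Asynchronously update x_i <- sign(W[i] @ x - theta[i]) until
    no unit changes, i.e., a fixed-point attractor is reached."""
    x = x.copy()
    for _ in range(max_iters):
        changed = False
        for i in range(len(x)):
            new = 1 if W[i] @ x - theta[i] >= 0 else -1
            if new != x[i]:
                x[i] = new
                changed = True
        if not changed:
            break
    return x

# Hebbian outer-product storage of two +/-1 patterns; the trained
# weight matrix is symmetric with zero diagonal, as in the abstract.
patterns = np.array([[1, -1, 1, -1, 1],
                     [1,  1, -1, -1, 1]])
W = patterns.T @ patterns / len(patterns)
np.fill_diagonal(W, 0)
theta = np.zeros(5)

# Error correction: a corrupted copy of patterns[0] (last bit flipped)
# flows back to the stored memory under the attractor dynamics.
noisy = np.array([1, -1, 1, -1, -1])
recovered = hopfield_converge(W, theta, noisy)
```

Running the dynamics on `noisy` recovers `patterns[0]`, which is the sense in which stored fixed points act as an error-correcting code.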


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1a1/8622935/2643e24e73fd/entropy-23-01494-g001.jpg
