Hillar Christopher, Chan Tenzin, Taubman Rachel, Rolnick David
Awecom, Inc., San Francisco, CA 94103, USA.
Singapore University of Technology and Design, Singapore 487372, Singapore.
Entropy (Basel). 2021 Nov 11;23(11):1494. doi: 10.3390/e23111494.
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1-ϵ})} memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
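The abstract does not spell out the MEF objective itself. As a rough illustration only, the sketch below implements standard Hopfield recall dynamics together with learning under the closely related minimum-probability-flow form for Ising-type energies, in which each training pattern x is penalized by exp(-ΔE_i(x)/2) for every single-bit flip i, a quantity that is convex in the weights and thresholds. All function names (energy, recall, mef_step) and hyperparameters here are illustrative assumptions, not taken from the paper.

import numpy as np

def energy(W, theta, x):
    # Hopfield energy E(x) = -0.5 * x^T W x + theta^T x, for x in {-1, +1}^n.
    return -0.5 * x @ W @ x + theta @ x

def recall(W, theta, x, max_iters=100):
    # Asynchronous fixed-point dynamics: flip each unit to the sign of its
    # local field until no unit changes (a fixed-point attractor is reached).
    x = x.copy()
    for _ in range(max_iters):
        changed = False
        for i in range(len(x)):
            s = 1 if W[i] @ x - theta[i] >= 0 else -1
            if s != x[i]:
                x[i], changed = s, True
        if not changed:
            break
    return x

def mef_step(W, theta, X, lr=0.01):
    # One gradient step on the convex probability-flow objective
    #   sum over patterns x and bit flips i of exp(-dE_i(x)/2),
    # where dE_i(x) = E(x with bit i flipped) - E(x)
    #              = 2 * x_i * ((W x)_i - theta_i)   (with W_ii = 0).
    # Minimizing it drives every training pattern toward a strict
    # local minimum of the energy, i.e., a robust stored memory.
    dE = 2.0 * X * (X @ W - theta)       # (m, n): energy gap per flip
    G = np.exp(-dE / 2.0)                # per-flip penalty weights
    gW = -(G * X).T @ X                  # gradient w.r.t. W
    gW = 0.5 * (gW + gW.T)               # keep weights symmetric
    np.fill_diagonal(gW, 0.0)            # no self-connections
    gtheta = (G * X).sum(axis=0)         # gradient w.r.t. thresholds
    return W - lr * gW, theta - lr * gtheta

# Illustrative usage: store a few random binary patterns, then check
# whether a corrupted pattern is restored by the attractor dynamics.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(5, 64)).astype(float)
W, theta = np.zeros((64, 64)), np.zeros(64)
for _ in range(500):
    W, theta = mef_step(W, theta, X, lr=0.02)
noisy = X[0].copy()
noisy[:6] *= -1                          # corrupt 6 of 64 bits
print(np.array_equal(recall(W, theta, noisy), X[0]))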