Folli Viola, Leonetti Marco, Ruocco Giancarlo
Center for Life Nanoscience, Istituto Italiano di Tecnologia, Rome, Italy.
Center for Life Nanoscience, Istituto Italiano di Tecnologia, Rome, Italy; Department of Physics, Sapienza University of Rome, Rome, Italy.
Front Comput Neurosci. 2017 Jan 10;10:144. doi: 10.3389/fncom.2016.00144. eCollection 2016.
Recurrent neural networks (RNNs) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNNs, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (p) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for p ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (p) that are stored in the network by appropriately fixing the connection weights. When p ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNNs with high storage capacity that can retrieve the desired patterns without distortion.
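The effect described above can be illustrated numerically. The sketch below is a minimal reconstruction under standard Hopfield assumptions, not the authors' code: binary ±1 patterns, the Hebbian rule J = (1/N) Σ_μ ξ^μ (ξ^μ)ᵀ with the diagonal either retained (the generalized model studied here) or zeroed (the classical choice), synchronous sign dynamics, and retrieval errors counted as the Hamming distance between the reached state and the stored pattern. All function names and parameter values are illustrative, and the paper's analytical error expression is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_matrix(patterns, zero_diagonal=False):
    """Hebbian connection matrix J = (1/N) * sum_mu xi^mu (xi^mu)^T.

    patterns: (p, N) array with +/-1 entries.
    zero_diagonal=True reproduces the classical Hopfield choice J_ii = 0;
    False keeps the self-couplings, as in the generalized model.
    """
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    if zero_diagonal:
        np.fill_diagonal(J, 0.0)
    return J

def retrieval_errors(J, pattern, steps=20):
    """Run synchronous sign dynamics starting from the stored pattern and
    count the spins that end up flipped (Hamming distance to the pattern)."""
    s = pattern.copy()
    for _ in range(steps):
        s_new = np.sign(J @ s)
        s_new[s_new == 0] = 1  # break zero-field ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return int(np.sum(s != pattern))

N = 100
for p in (10, 14, 50, 1000):  # below, near, and far above the ~0.14*N bound
    patterns = rng.choice([-1, 1], size=(p, N))
    J = hebbian_matrix(patterns, zero_diagonal=False)
    errs = np.mean([retrieval_errors(J, xi) for xi in patterns])
    print(f"p = {p:5d}: mean retrieval errors per pattern = {errs:.2f}")
```

With the diagonal retained, the self-coupling J_ii = p/N grows linearly with the number of stored patterns, so once p ≫ N the self-interaction dominates the crosstalk noise and each stored pattern is a fixed point of the dynamics, consistent with the drop in retrieval errors described in the abstract.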