Laboratory of Physics of the Ecole Normale Supérieure, CNRS UMR 8023 & PSL Research, 24 rue Lhomond, 75005 Paris, France.
Phys Rev Lett. 2020 Jan 31;124(4):048302. doi: 10.1103/PhysRevLett.124.048302.
Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^{2} pairwise interactions in an RNN with N neurons so as to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ε|^{-D}, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
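The capacity scaling L/N ∼ |log ε|^{-D} can be illustrated numerically. The sketch below is only a visualization of the stated asymptotic law, not the paper's derivation; the prefactor `c` is a hypothetical placeholder, since the abstract reports only the scaling with ε and D.

```python
import math

def capacity_scaling(eps, D, c=1.0):
    """Illustrative capacity L/N ~ c * |log eps|^{-D}.

    eps: positional error along each manifold (0 < eps < 1)
    D:   manifold dimension
    c:   hypothetical prefactor (not given in the abstract)
    """
    return c * abs(math.log(eps)) ** (-D)

# Demanding higher resolution (smaller eps) reduces capacity,
# and the reduction is faster for higher manifold dimension D.
for D in (1, 2, 3):
    row = [round(capacity_scaling(e, D), 4) for e in (1e-1, 1e-2, 1e-3)]
    print(f"D={D}: {row}")
```

The loop prints one row per dimension, showing the capacity shrinking as ε decreases, with a steeper drop for larger D, consistent with the |log ε|^{-D} law.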