

Dreaming neural networks: Forgetting spurious memories and reinforcing pure ones.

Affiliations

Dipartimento di Matematica e Fisica Ennio De Giorgi, Università del Salento, Italy; GNFM-INdAM Sezione di Lecce, Italy; INFN, Istituto Nazionale di Fisica Nucleare, Sezione di Lecce, Italy.

Dipartimento di Matematica, Sapienza Università di Roma, Italy; GNFM-INdAM Sezione di Roma, Italy.

Publication information

Neural Netw. 2019 Apr;112:24-40. doi: 10.1016/j.neunet.2019.01.006. Epub 2019 Jan 29.

Abstract

The standard Hopfield model for associative neural networks accounts for biological Hebbian learning and serves as the harmonic oscillator of pattern recognition; however, its maximal storage capacity is α ∼ 0.14, far from the theoretical bound for symmetric networks, i.e. α = 1. Inspired by the sleeping and dreaming mechanisms of mammalian brains, we propose an extension of this model that displays the standard on-line (awake) learning mechanism (which allows the storage of external information in terms of patterns) together with an off-line (sleep) unlearning-and-consolidating mechanism (which allows spurious-pattern removal and pure-pattern reinforcement): the resulting daily prescription is able to saturate the theoretical bound α = 1 while also remaining extremely robust against thermal noise. The emergent neural and synaptic features are analyzed both analytically and numerically. In particular, beyond obtaining a phase diagram for the neural dynamics, we focus on synaptic plasticity and give explicit prescriptions for the temporal evolution of the synaptic matrix. We analytically prove that our algorithm makes the Hebbian kernel converge, with high probability, to the projection matrix built over the pure stored patterns. Furthermore, we obtain a sharp and explicit estimate of the "sleep rate" required to ensure such convergence. Finally, we run extensive numerical simulations (mainly Monte Carlo sampling) to check the approximations underlying the analytical investigation (e.g., the whole theory is developed at the so-called replica-symmetric level, as is standard in the Amit-Gutfreund-Sompolinsky reference framework) and possible finite-size effects, finding overall full agreement with the theory.
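To make the sleep mechanism concrete, below is a minimal numerical sketch, in Python, of a one-parameter "dreaming" kernel of the kind the abstract describes: a synaptic matrix J(t) that coincides with the Hebbian kernel at sleep time t = 0 and converges to the projection matrix built over the stored patterns as t → ∞. The interpolation formula (1 + t)(I + tC)⁻¹, the variable names, and the parameter values below are our own illustrative assumptions, not necessarily the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 100                             # neurons, patterns (load alpha = P/N = 0.5)
xi = rng.choice([-1.0, 1.0], size=(P, N))   # random binary patterns

C = xi @ xi.T / N                           # P x P pattern-correlation matrix

def dreaming_kernel(t):
    """Synaptic matrix after 'sleeping' for time t (illustrative form).

    t = 0 gives the Hebbian kernel xi.T @ xi / N; as t -> infinity the
    kernel approaches the projection matrix onto the stored patterns.
    """
    A = (1.0 + t) * np.linalg.inv(np.eye(P) + t * C)
    return xi.T @ A @ xi / N

# Projection-matrix limit (C is almost surely invertible for P < N).
J_proj = xi.T @ np.linalg.inv(C) @ xi / N

for t in (0.0, 1.0, 10.0, 1000.0):
    dist = np.linalg.norm(dreaming_kernel(t) - J_proj)
    print(f"t = {t:7.1f}   ||J(t) - J_proj||_F = {dist:8.4f}")

# Sanity check: pure patterns are fixed points of the long-sleep network.
recalled = np.sign(dreaming_kernel(1000.0) @ xi[0])
print("fraction of pattern-0 bits recalled:", np.mean(recalled == xi[0]))
```

Under this choice the stored patterns become exact fixed points of the zero-temperature dynamics in the t → ∞ limit (J_proj ξ^μ = ξ^μ), which is the sense in which sleep "reinforces pure memories", while the suppressed cross-talk between patterns removes the spurious ones.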

