Sapienza University of Rome, Department of Mathematics, 00185, Rome, Italy.
Istituto Nazionale di Alta Matematica, 00185, Rome, Italy.
Neural Comput. 2023 Apr 18;35(5):930-957. doi: 10.1162/neco_a_01578.
Hebb's learning traces its origin to Pavlov's classical conditioning; however, while the former has been extensively modeled over the past decades (e.g., by the Hopfield model and countless variations on the theme), modeling of the latter has remained largely unaddressed so far. Furthermore, a mathematical bridge connecting these two pillars is entirely lacking. The main difficulty toward this goal lies in the intrinsically different scales of the information involved: Pavlov's theory concerns correlations between concepts that are (dynamically) stored in the synaptic matrix, as exemplified by the celebrated experiment starring a dog and a ringing bell; conversely, Hebb's theory concerns correlations between pairs of neurons, as summarized by the famous statement that neurons that fire together, wire together. In this letter, we rely on stochastic process theory to prove that, as long as the timescales of neurons and synapses are kept largely split, Pavlov's mechanism spontaneously takes place and ultimately gives rise to synaptic weights that recover the Hebbian kernel.
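The timescale-separation claim can be illustrated numerically. The sketch below is not the letter's model; it is a minimal toy (with assumed names `xi`, `tau`, `J_hebb` and an assumed relaxation rule) in which fast binary neurons are clamped to randomly presented patterns while the synapses drift slowly toward the instantaneous pairwise products. With the synaptic timescale `tau` much longer than the neuronal one, the weights converge to the empirical neuron-neuron correlations, i.e., the Hebbian kernel (1/K) Σ_μ ξ_i^μ ξ_j^μ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal setup: K binary patterns over N neurons.
N, K = 10, 3
xi = rng.choice([-1, 1], size=(K, N))

# Hebbian kernel the slow synaptic dynamics should recover.
J_hebb = (xi.T @ xi) / K
np.fill_diagonal(J_hebb, 0.0)

# Slow relaxation dJ/dt = (s s^T - J) / tau, with the fast neural state s
# clamped to a randomly chosen stored pattern at each step; tau >> 1
# enforces the split between neuronal and synaptic timescales.
tau = 1000.0
J = np.zeros((N, N))
for _ in range(200_000):
    s = xi[rng.integers(K)]          # fast variable: one stored pattern
    J += (np.outer(s, s) - J) / tau  # slow Hebbian-like drift
np.fill_diagonal(J, 0.0)

print(np.max(np.abs(J - J_hebb)))    # small residual fluctuation
```

Because the weight update is an exponential moving average with rate 1/tau, its stationary fluctuation around the pattern-averaged correlation scales like 1/sqrt(tau), so widening the timescale split drives `J` ever closer to `J_hebb`.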