Dipartimento di Fisica, Sapienza Università di Roma, P.le A. Moro 2, 00185 Roma, Italy.
Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France.
J Chem Phys. 2022 Mar 14;156(10):104107. doi: 10.1063/5.0084219.
The Hebbian unlearning algorithm, i.e., an unsupervised local procedure used to improve the retrieval properties of Hopfield-like neural networks, is numerically compared to a supervised algorithm that trains a linear symmetric perceptron. We analyze the stability of the stored memories: the basins of attraction obtained with the Hebbian unlearning technique are found to be comparable in size to those obtained with the symmetric perceptron, and the two algorithms are found to converge to the same region of Gardner's space of interactions, having followed similar learning paths. A geometric interpretation of Hebbian unlearning is proposed to explain its optimal performance. Because the Hopfield model is also a prototypical model of disordered magnetic systems, it might be possible to translate our results to other models of interest for memory storage in materials.
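For concreteness, the following is a minimal sketch of the standard Hebbian unlearning procedure summarized above, not the authors' exact code: couplings are built with the Hebb rule, the network is relaxed from a random configuration to a (possibly spurious) fixed point, and a small multiple of that fixed point's outer product is subtracted from the couplings. The parameter values (N, P, the unlearning rate eps, and the number of steps D) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N, P = 100, 10   # neurons, stored patterns (illustrative values)
    eps = 0.01       # unlearning rate (assumed small)
    D = 50           # number of unlearning steps (assumed)

    # Hebbian couplings from random binary patterns xi in {-1, +1}:
    # J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, with zero diagonal.
    xi = rng.choice([-1, 1], size=(P, N))
    J = xi.T @ xi / N
    np.fill_diagonal(J, 0.0)

    def relax(J, s, max_sweeps=100):
        """Zero-temperature asynchronous dynamics to a fixed point."""
        n = len(s)
        for _ in range(max_sweeps):
            changed = False
            for i in rng.permutation(n):
                new = 1 if J[i] @ s >= 0 else -1
                if new != s[i]:
                    s[i] = new
                    changed = True
            if not changed:   # fixed point reached
                break
        return s

    # Hebbian unlearning: relax from a random state, then subtract the
    # resulting (possibly spurious) attractor from the couplings.
    for _ in range(D):
        s = relax(J, rng.choice([-1, 1], size=N))
        J -= (eps / N) * np.outer(s, s)
        np.fill_diagonal(J, 0.0)

The procedure is unsupervised and local in the sense used in the abstract: each update only involves the states of the two neurons a coupling connects, and no information about the stored patterns is needed during unlearning.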