Department of Cognitive Science, University of California, Irvine, CA, USA.
Department of Computer Science, University of California, Irvine, CA, USA.
Nat Commun. 2024 May 2;15(1):3722. doi: 10.1038/s41467-024-46976-4.
An important difference between brains and deep neural networks is the way they learn. Nervous systems learn online, where a stream of noisy data points is presented in a non-independent and identically distributed (non-i.i.d.) way. Further, synaptic plasticity in the brain depends only on information local to each synapse. Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, noise-free, independent and identically distributed (i.i.d.) setting. Understanding how neural networks learn under the same constraints as the brain is an open problem for neuroscience and neuromorphic computing, and a standard approach to it has yet to be established. In this paper, we propose that discrete graphical models that learn via an online maximum a posteriori (MAP) learning algorithm could provide such an approach. We implement this kind of model in a neural network called the Sparse Quantized Hopfield Network (SQHN). We show that our model outperforms state-of-the-art neural networks on associative memory tasks, outperforms these networks in online continual settings, learns efficiently from noisy inputs, and outperforms baselines on an episodic memory task.
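To make the two constraints the abstract emphasizes concrete (online presentation of one pattern at a time, and purely local synaptic updates), here is a minimal sketch in Python. It is not the paper's SQHN; it is a classic binary Hopfield associative memory whose running-average Hebbian rule is online and local, loosely analogous to the count-based online MAP updates the paper proposes. All names (`HopfieldAM`, `store`, `recall`) are hypothetical.

```python
import numpy as np

class HopfieldAM:
    """Classic binary Hopfield associative memory with local, online
    Hebbian learning. Illustrative only -- not the paper's SQHN."""

    def __init__(self, n):
        self.n = n
        self.W = np.zeros((n, n))  # symmetric weight matrix
        self.count = 0             # number of patterns seen so far

    def store(self, x):
        # x in {-1, +1}^n arrives one pattern at a time (online), and
        # each weight update uses only the two neurons it connects (local).
        x = np.asarray(x, dtype=float)
        self.count += 1
        # Running-average Hebbian update: W is the mean of outer
        # products over all patterns seen so far, computed incrementally.
        self.W += (np.outer(x, x) - self.W) / self.count
        np.fill_diagonal(self.W, 0.0)  # no self-connections

    def recall(self, x, steps=20):
        # Iterative sign-threshold dynamics to denoise a corrupted cue.
        x = np.asarray(x, dtype=float)
        for _ in range(steps):
            x = np.sign(self.W @ x)
            x[x == 0] = 1.0  # break ties deterministically
        return x

rng = np.random.default_rng(0)
net = HopfieldAM(100)
patterns = rng.choice([-1.0, 1.0], size=(5, 100))
for p in patterns:                 # patterns presented sequentially
    net.store(p)
noisy = patterns[0].copy()
flip = rng.choice(100, size=10, replace=False)
noisy[flip] *= -1                  # corrupt 10% of the bits
recovered = net.recall(noisy)
print(np.mean(recovered == patterns[0]))  # fraction of bits recovered
```

The running average (rather than a raw Hebbian sum) keeps the update strictly incremental, so each pattern is seen exactly once and never revisited, matching the online constraint the abstract describes.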