Gabriele Scheler, Martin L. Schumann, Johann Schumann
Carl Correns Foundation for Mathematical Biology, 1030 Judson Dr, Mountain View, CA, 94040, USA.
Dept. of Computer Science, Ludwig Maximilian University, Munich, Germany.
J Comput Neurosci. 2025 Mar 22. doi: 10.1007/s10827-025-00901-w.
We present a model of pattern memory and retrieval with novel, technically useful, and biologically realistic properties. Specifically, we enter n variations of k pattern classes (n*k patterns) into a cortex-like balanced inhibitory-excitatory network with heterogeneous neurons, and let each pattern spread within the recurrent network. We show that we can identify high mutual-information (MI) neurons as the major information-bearing elements within each pattern representation. We employ a simple one-shot adaptive (learning) process focused on high-MI neurons and inhibition. Such 'localist plasticity' is highly efficient, because it requires only a few adaptations per pattern. Specifically, we store k=10 patterns of size s=400 in a 1000/1200-neuron network. We stimulate high-MI neurons and in this way recall patterns, such that the whole network comes to represent the pattern. We assess the quality of the representation (a) before learning, when entering the pattern into a naive network, (b) after learning, on the adapted network, and (c) after recall by stimulation. The recalled patterns could be easily recognized by a trained classifier. The recalled pattern 'unfolds' over the recurrent network with high similarity to the original input pattern. We discuss the distribution of neuron properties in the network, and find that an initial Gaussian distribution changes into a heavier-tailed, lognormal distribution during the adaptation process. The remarkable result is that we are able to achieve reliable pattern recall by stimulating only high-information neurons. This work provides a biologically inspired model of cortical memory and may have interesting technical applications.
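The selection of information-bearing neurons described above rests on estimating, for each neuron, the mutual information between its responses and the pattern-class label across all n*k presentations. The following is a minimal sketch of such a per-neuron MI estimate, not the authors' code: the quantile binning, bin count, and the toy tuned/untuned neurons are illustrative assumptions.

```python
import numpy as np

def mutual_information(responses, labels, n_bins=4):
    """Estimate MI (in bits) between one neuron's responses and class labels.

    Responses are discretized into n_bins quantile bins (an assumption of
    this sketch), then MI is computed from the empirical joint distribution.
    """
    edges = np.quantile(responses, np.linspace(0, 1, n_bins + 1)[1:-1])
    binned = np.digitize(responses, edges)  # bin index 0..n_bins-1 per sample
    classes = np.unique(labels)
    joint = np.zeros((n_bins, len(classes)))
    for i, c in enumerate(classes):
        for b in range(n_bins):
            joint[b, i] = np.mean((binned == b) & (labels == c))
    p_resp = joint.sum(axis=1, keepdims=True)   # marginal over bins
    p_class = joint.sum(axis=0, keepdims=True)  # marginal over classes
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_resp * p_class))
    return np.nansum(terms)  # zero-probability cells contribute nothing

# Toy setup: k=10 classes, n=20 variants each (hypothetical numbers).
rng = np.random.default_rng(0)
k, n = 10, 20
labels = np.repeat(np.arange(k), n)
tuned = labels * 0.5 + rng.normal(0, 0.2, size=k * n)  # class-tuned neuron
noise = rng.normal(0, 1.0, size=k * n)                 # uninformative neuron
mi_tuned = mutual_information(tuned, labels)
mi_noise = mutual_information(noise, labels)
```

Ranking neurons by such an MI score and keeping the top scorers is one straightforward way to obtain the "high-MI" subset that the adaptation and recall-by-stimulation steps target.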