Indiana University Department of Physics, Bloomington, USA.
BMC Neurosci. 2010 Jan 6;11:3. doi: 10.1186/1471-2202-11-3.
How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks.
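To make the skewed-weight idea concrete, the short Python sketch below draws a toy weight matrix from a lognormal distribution, so that most connections are weak and a few are strong, and reports how much of the total synaptic strength the strongest connections carry. The network size, the lognormal form, and its parameters are illustrative assumptions, not values taken from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 60  # toy network size (assumed for illustration)

# Lognormal weights: many weak connections, a few strong ones.
weights = rng.lognormal(mean=-1.0, sigma=1.0, size=(n_neurons, n_neurons))
np.fill_diagonal(weights, 0.0)  # no self-connections

# How much of the total strength do the strongest 10% of connections carry?
flat = np.sort(weights.ravel())[::-1]
top_share = flat[: flat.size // 10].sum() / flat.sum()
print(f"Strongest 10% of connections carry {top_share:.0%} of total weight")
```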
Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it closely matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns.
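The kind of comparison described above can be sketched schematically. The Python fragment below uses a simple probabilistic transmission model with lognormally distributed, row-normalized weights, runs many activity cascades, and counts how many cascade patterns repeat as the skew of the weight distribution is varied. This is only an illustration of the general approach under assumed model details; it is not the model fit to the slice data in this study.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def make_weights(n, sigma):
    """Lognormal transmission probabilities; sigma tunes the skew (assumed form)."""
    w = rng.lognormal(mean=0.0, sigma=sigma, size=(n, n))
    np.fill_diagonal(w, 0.0)
    # Row-normalize so each neuron's outgoing probabilities sum to ~1,
    # keeping cascades near a branching ratio of one.
    return w / w.sum(axis=1, keepdims=True)

def run_cascade(w, start, max_steps=20):
    """Probabilistic cascade: active unit i activates unit j with probability w[i, j]."""
    n = w.shape[0]
    active = np.zeros(n, dtype=bool)
    active[start] = True
    frames = [active.copy()]
    for _ in range(max_steps):
        p_on = 1.0 - np.prod(1.0 - w[active], axis=0)  # prob. each unit fires next step
        active = rng.random(n) < p_on
        if not active.any():
            break
        frames.append(active.copy())
    return tuple(frame.tobytes() for frame in frames)  # hashable activity pattern

def repeating_patterns(sigma, n=30, trials=2000):
    """Count multi-step cascade patterns that recur across trials."""
    w = make_weights(n, sigma)
    counts = Counter()
    for _ in range(trials):
        pattern = run_cascade(w, start=rng.integers(n))
        if len(pattern) > 1:  # ignore cascades that die immediately
            counts[pattern] += 1
    return sum(1 for c in counts.values() if c > 1)

for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"sigma={sigma:.1f}: {repeating_patterns(sigma)} repeating cascade patterns")
```

In the study itself, the comparison was between patterns recorded with multielectrode arrays and patterns produced by the fitted model; this sketch only illustrates how pattern repetition can be tallied as the weight skew is tuned.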
We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. Fixed distributions impose constraints on learning, however. The network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
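One hypothetical way to reconcile a fixed weight distribution with learning, sketched below in Python, is a rank-preserving update: individual connections are strengthened or weakened by a Hebbian-like rule, and the original set of weight values is then reassigned according to the new ranking, so that which connections are strong changes while the overall distribution does not. The update rule and parameters are illustrative assumptions, not mechanisms proposed in this study.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 40
weights = rng.lognormal(mean=-1.0, sigma=1.0, size=(n, n))  # assumed skewed distribution
np.fill_diagonal(weights, 0.0)                              # no self-connections
original_values = np.sort(weights.ravel())                  # the distribution to preserve

def learn_step(w, pre, post, lr=0.05):
    """Hebbian-like update followed by a rank-preserving remap (hypothetical rule)."""
    w = w + lr * np.outer(pre, post)  # strengthen co-active connections
    np.fill_diagonal(w, 0.0)
    # Reassign the original set of weight values according to the new ranking:
    # individual connections change, but the overall distribution stays fixed.
    order = np.argsort(w.ravel())
    remapped = np.empty(w.size)
    remapped[order] = original_values
    return remapped.reshape(w.shape)

pre = (rng.random(n) < 0.2).astype(float)   # toy pre-/postsynaptic activity
post = (rng.random(n) < 0.2).astype(float)
new_weights = learn_step(weights, pre, post)

print("Distribution preserved:", np.allclose(np.sort(new_weights.ravel()), original_values))
print("Connections re-ranked: ", int(np.count_nonzero(new_weights != weights)))
```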