Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
Department of Bioengineering, Imperial College London, London, UK.
Sci Rep. 2023 Aug 9;13(1):12939. doi: 10.1038/s41598-023-39108-3.
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
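The core computational idea of the abstract, sampling from a learned prior by learning the inverse cumulative distribution function, is inverse transform sampling. The following is a minimal, non-neural sketch of that principle only; the stimulus distribution, grid size, and variable names are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus stream: draws from an arbitrary distribution
# (a two-Gaussian mixture stands in for "external world" statistics).
stimuli = np.concatenate([rng.normal(-2.0, 0.5, 5000),
                          rng.normal(1.0, 1.0, 5000)])

# "Learning" the prior: estimate the inverse CDF on a grid of quantiles.
quantile_grid = np.linspace(0.0, 1.0, 101)
inverse_cdf = np.quantile(stimuli, quantile_grid)

# "Spontaneous sampling": uniform noise pushed through the learned
# inverse CDF yields states distributed like the learned prior.
u = rng.uniform(0.0, 1.0, 10000)
samples = np.interp(u, quantile_grid, inverse_cdf)
```

Because uniform samples mapped through the inverse CDF reproduce the original distribution, `samples` matches the statistics of `stimuli`; in the paper this mapping is learned by a network rather than computed with `np.quantile`.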