Yanping Huang and Rajesh P. N. Rao
Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195, U.S.A.
Neural Comput. 2016 Aug;28(8):1503-26. doi: 10.1162/NECO_a_00851. Epub 2016 Jun 27.
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
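The particle-filter-style inference the abstract describes can be sketched in a toy form: each "spike" in the higher layer is a sample of a hidden world state, and the histogram of samples approximates the posterior. The sketch below is a minimal, hypothetical illustration (the state count, transition matrix, firing rates, and particle count are all made-up parameters, not taken from the paper), using a standard bootstrap particle filter with Poisson observation likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy HMM: 3 hidden world states, 4 Poisson sensory neurons.
n_states, n_sensors, T = 3, 4, 200
A = np.array([[0.90, 0.05, 0.05],   # transition probabilities (assumed)
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
rates = rng.uniform(1.0, 8.0, size=(n_states, n_sensors))  # likelihood model

# Generate hidden states and Poisson spike counts (lower-layer activity).
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(n_states, p=A[states[t - 1]])
spikes = rng.poisson(rates[states])  # shape (T, n_sensors)

# Particle-filter-style inference: each of N samples plays the role of a
# higher-layer spike; the sample histogram approximates the posterior.
N = 500
particles = rng.integers(0, n_states, size=N)
posterior = np.zeros((T, n_states))
for t in range(T):
    # Propagate each particle through the transition model.
    particles = np.array([rng.choice(n_states, p=A[p]) for p in particles])
    # Poisson log-likelihood of the observed spike counts for each particle.
    logw = (spikes[t] * np.log(rates[particles]) - rates[particles]).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample particles in proportion to their likelihood weights.
    particles = rng.choice(particles, size=N, p=w)
    posterior[t] = np.bincount(particles, minlength=n_states) / N

print("decoding accuracy:", (posterior.argmax(axis=1) == states).mean())
```

In the paper's neural interpretation, the propagation step corresponds to recurrent dynamics implementing the transition model and the weighting step to feedforward input implementing the likelihood; here both are written out explicitly for clarity.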