Yu Zhaofei, Chen Feng, Deng Fei
IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5761-5766. doi: 10.1109/TNNLS.2018.2805813. Epub 2018 Mar 9.
Numerous experimental data show that the human brain can represent probability distributions and perform Bayesian inference. However, it remains unclear how the brain implements probabilistic inference in neural circuits. Several models have been proposed to explain how networks of neurons carry out maximum a posteriori (MAP) estimation and marginal inference, but they are all task specific in that they treat MAP estimation and marginal inference separately. In this brief, we propose that the human brain could implement MAP estimation and marginal inference in the same network of neurons. We illustrate our result on hidden Markov models and prove that a recurrent neural network (RNN) implementation of belief propagation can be tuned either to perform approximate Bayesian inference (providing the posterior or conditional distribution over the latent causes of observations) or to identify the MAP, i.e., the peak of the joint distribution. The key tuning parameter is a temperature that controls the precision of the probability distributions being optimized. Theoretical analyses and experimental results demonstrate that RNNs can carry out near-optimal MAP estimation and marginal inference.
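To illustrate the abstract's central idea, the sketch below shows how a single temperature parameter can interpolate belief propagation on an HMM chain between marginal inference (sum-product, T = 1) and MAP estimation (max-product, T → 0). This is a minimal NumPy illustration of the general principle, not the paper's RNN implementation; the function names (`soft_max_sum`, `forward_messages`) and the specific log-domain formulation are assumptions introduced here for clarity.

```python
import numpy as np

def soft_max_sum(log_vals, T, axis):
    """Temperature-controlled aggregation in the log domain.

    T = 1 reduces to log-sum-exp (sum-product / marginalization);
    T -> 0 approaches max (max-product / MAP). Implemented as
    T * logsumexp(log_vals / T) with the usual max-shift for stability.
    """
    if T == 0:
        return np.max(log_vals, axis=axis)
    scaled = log_vals / T
    m = np.max(scaled, axis=axis, keepdims=True)
    return T * (np.squeeze(m, axis=axis)
                + np.log(np.sum(np.exp(scaled - m), axis=axis)))

def forward_messages(log_A, log_B, log_pi, obs, T):
    """Forward pass of belief propagation on an HMM chain.

    log_A[i, j]: log transition prob i -> j
    log_B[i, o]: log emission prob of symbol o in state i
    log_pi[i]:   log prior over the initial state
    """
    K = log_pi.shape[0]
    alpha = np.zeros((len(obs), K))
    alpha[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, len(obs)):
        # Aggregate over the previous hidden state with temperature T:
        # the same recursion yields forward (T=1) or Viterbi (T=0) messages.
        alpha[t] = soft_max_sum(alpha[t - 1][:, None] + log_A, T, axis=0) \
                   + log_B[:, obs[t]]
    return alpha
```

With T = 1, aggregating the final messages recovers the marginal likelihood of the observations; with T = 0, taking the max recovers the probability of the MAP state sequence, so one recursion serves both inference tasks.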