Samuel J. Gershman
Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, United States.
Front Artif Intell. 2019 Sep 18;2:18. doi: 10.3389/frai.2019.00018. eCollection 2019.
The idea that the brain learns generative models of the world has been widely promulgated. Most approaches have assumed that the brain learns an explicit density model that assigns a probability to each possible state of the world. However, explicit density models are difficult to learn, requiring approximate inference techniques that may find poor solutions. An alternative approach is to learn an implicit density model that can sample from the generative model without evaluating the probabilities of those samples. The implicit model can be trained to fool a discriminator into believing that the samples are real. This is the idea behind generative adversarial algorithms, which have proven adept at learning realistic generative models. This paper develops an adversarial framework for probabilistic computation in the brain. It first considers how generative adversarial algorithms overcome some of the problems that vex prior theories based on explicit density models. It then discusses the psychological and neural evidence for this framework, as well as how the breakdown of the generator and discriminator could lead to delusions observed in some mental disorders.
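The adversarial scheme the abstract describes — an implicit generator that only produces samples, trained against a discriminator that tries to tell real data from generated data — can be sketched in a toy form. The setup below is purely illustrative and not from the paper: the "world" emits one-dimensional samples from N(3, 1), the generator simply shifts noise by a learned parameter `theta`, and the discriminator is a hand-rolled logistic regression. Both players follow the standard GAN gradient directions (discriminator ascends log D(real) + log(1 − D(fake)); generator ascends log D(fake)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Discriminator: logistic regression, D(x) = estimated P(x is a real sample).
def discriminate(x, w):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))

theta = 0.0        # generator parameter: implicit model emits x = theta + z
w = np.zeros(2)    # discriminator parameters (bias, slope)
lr_d, lr_g, batch = 0.05, 0.01, 64

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)          # samples from the "world"
    fake = theta + rng.normal(0.0, 1.0, batch)  # generator samples noise; no
                                                # density is ever evaluated

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # standard logistic-regression gradient with labels 1 (real), 0 (fake).
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = label - discriminate(x, w)
        w += lr_d * np.array([err.mean(), (err * x).mean()])

    # Generator ascent on log D(fake): shift samples toward regions the
    # discriminator currently believes are real.
    p = discriminate(fake, w)
    theta += lr_g * w[1] * (1.0 - p).mean()

print(round(theta, 1))  # theta drifts from 0 toward the data mean of 3
```

Note that the generator never assigns probabilities to its samples: it is an implicit density model in the abstract's sense, improved only through the discriminator's feedback. In this toy case the generator's shift parameter migrates toward the true data mean until the discriminator can no longer separate the two sample streams.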