A Emin Orhan, Wei Ji Ma
Center for Neural Science, New York University, New York, NY, 10003, USA.
Department of Psychology, New York University, New York, NY, 10003, USA.
Nat Commun. 2017 Jul 26;8(1):138. doi: 10.1038/s41467-017-00181-8.
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules.

Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
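The core claim above can be illustrated with a toy sketch: a generic one-hidden-layer network, with no task-specific operations built in, trained by a plain error-based (delta/backprop) rule to categorize a stimulus encoded by a noisy input population. All details here (population size, tuning-curve shape, learning rate, task) are hypothetical choices for illustration, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an input population with Gaussian tuning curves
# and Poisson spiking noise encodes a scalar stimulus s; the network
# must report the binary category sign(s) despite the noise.
n_in, n_hid, gain = 50, 20, 10.0
prefs = np.linspace(-3, 3, n_in)          # preferred stimuli of input neurons

def encode(s):
    """Noisy population response to stimulus s (Poisson spike counts)."""
    rates = gain * np.exp(-0.5 * (s - prefs) ** 2)
    return rng.poisson(rates).astype(float)

# Generic network: random initialization, no hand-crafted operations.
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
b1 = np.zeros(n_hid)
w2 = rng.normal(0.0, 0.1, n_hid)
b2 = 0.0

def forward(r):
    h = np.tanh(W1 @ r + b1)
    y = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # network's P(category = 1)
    return h, y

# Error-based learning: every update is driven only by the scalar
# output error on that trial (squared-error gradient, delta rule).
lr = 0.005
for _ in range(5000):
    s = rng.uniform(-2, 2)
    target = 1.0 if s > 0 else 0.0
    r = encode(s)
    h, y = forward(r)
    err = y - target
    w2 -= lr * err * h
    b2 -= lr * err
    dh = err * w2 * (1.0 - h ** 2)
    W1 -= lr * np.outer(dh, r)
    b1 -= lr * dh

# Evaluate on fresh noisy trials.
trials, correct = 1000, 0
for _ in range(trials):
    s = rng.uniform(-2, 2)
    _, y = forward(encode(s))
    correct += int((y > 0.5) == (s > 0))
acc = correct / trials
print(acc)
```

After training, the network classifies well above chance, with its residual errors concentrated near the category boundary where the noisy code makes trials genuinely ambiguous; this mirrors, in miniature, the paper's point that uncertainty-sensitive behavior can emerge from generic architectures and non-probabilistic feedback alone.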