Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, United States.
Carney Institute for Brain Science, Brown University, Providence, United States.
Elife. 2021 Apr 6;10:e65074. doi: 10.7554/eLife.65074.
In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
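The pipeline the abstract describes — simulate from a generative model, train a neural network to approximate the (otherwise intractable) likelihood, then reuse that network for fast posterior sampling — can be sketched in miniature. The sketch below is illustrative, not the authors' implementation: it uses a toy Gaussian simulator (whose likelihood is known in closed form, purely so the approximation can be checked), kernel-density estimates of the simulated likelihood as training targets, a tiny hand-rolled numpy MLP, and a Metropolis-Hastings sampler driven by the learned likelihood. All function names, network sizes, and hyperparameters are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: x ~ Normal(theta, 1). Its likelihood is tractable,
# which lets us verify the approximation; in the intended use case the
# simulator would be a process model with no closed-form likelihood.
def simulate(theta, n):
    return rng.normal(theta, 1.0, size=n)

# Empirical log-likelihood targets from simulations (Gaussian KDE).
def kde_logpdf(samples, x, bw=0.25):
    z = (x[:, None] - samples[None, :]) / bw
    dens = np.exp(-0.5 * z ** 2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-12)

# One-off simulation cost: for each theta, simulate, then record
# (theta, x, estimated log p(x | theta)) triples as training data.
T, X, Y = [], [], []
for th in rng.uniform(-3, 3, size=300):
    sims = simulate(th, 1000)
    xs = rng.uniform(th - 3, th + 3, size=32)
    T.append(np.full(32, th)); X.append(xs); Y.append(kde_logpdf(sims, xs))
T, X, Y = map(np.concatenate, (T, X, Y))
inp = np.stack([T, X], axis=1)
mu_in, sd_in = inp.mean(0), inp.std(0)
mu_y, sd_y = Y.mean(), Y.std()
Z, Yn = (inp - mu_in) / sd_in, (Y - mu_y) / sd_y

# Tiny tanh MLP trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(4000):
    h = np.tanh(Z @ W1 + b1)
    g = 2 * ((h @ W2 + b2).ravel() - Yn)[:, None] / len(Yn)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)
    gW1, gb1 = Z.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def learned_loglik(theta, data):
    q = (np.stack([np.full_like(data, theta), data], 1) - mu_in) / sd_in
    out = np.tanh(q @ W1 + b1) @ W2 + b2
    return float((out.ravel() * sd_y + mu_y).sum())  # iid trials: sum log-liks

# Amortized inference: Metropolis-Hastings using the learned likelihood.
# No further simulation is needed here, or for any future dataset.
obs = simulate(1.0, 50)                      # "observed" data, true theta = 1
theta, ll, chain = 0.0, learned_loglik(0.0, obs), []
for _ in range(4000):
    prop = theta + rng.normal(0, 0.3)
    if -3 < prop < 3:                        # flat prior on (-3, 3)
        ll_prop = learned_loglik(prop, obs)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)
posterior = np.array(chain[1000:])
print(posterior.mean())                      # should land near the true theta
```

The key design point mirrored here is the amortization: the expensive step (simulating and training) happens once per model, while each new dataset only requires cheap forward passes through the network inside the sampler.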