
An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case.

Author Information

Ryan Smith, Philipp Schwartenbeck, Thomas Parr, Karl J. Friston

Affiliations

Laureate Institute for Brain Research, Tulsa, OK, United States.

Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, United Kingdom.

Publication Information

Front Comput Neurosci. 2020 May 19;14:41. doi: 10.3389/fncom.2020.00041. eCollection 2020.

Abstract

Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning (and specifically state-space expansion and reduction) within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) "slots" that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning associated with these slots can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of "one-shot" generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.
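The Bayesian model reduction step described above has a closed-form expression when beliefs about a concept "slot" are parameterized as Dirichlet concentration counts, as is standard in active inference models. The sketch below is illustrative only, not the paper's implementation: the function names, the sign convention (positive values favoring the reduced model), and the toy counts are assumptions introduced here for exposition.

```python
from math import lgamma

def betaln(alpha):
    """Log of the multivariate beta function of a Dirichlet count vector."""
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

def delta_F(q, p, r):
    """Log-evidence difference of a reduced Dirichlet prior `r` relative to
    the full prior `p`, given posterior counts `q` learned under `p`.

    Uses the Bayesian model reduction identity
        dF = ln B(p) + ln B(s) - ln B(r) - ln B(q),  with  s = q + r - p,
    so positive dF means the reduced (simpler) model has higher evidence,
    and the learning accumulated in this "slot" can be reset to `r`.
    """
    s = [qi + ri - pi for qi, pi, ri in zip(q, p, r)]
    return betaln(p) + betaln(s) - betaln(r) - betaln(q)

# Toy "concept slot" over two outcomes: a flat prior [1, 1], posterior
# counts [21, 1] after 20 observations of outcome 1, and a candidate
# reduced prior [20, 1] that builds that regularity directly into the model.
dF = delta_F(q=[21.0, 1.0], p=[1.0, 1.0], r=[20.0, 1.0])
print(dF > 0)  # here the reduced model is favored
```

In the scheme the abstract outlines, a comparison of this kind would decide whether the concept learning associated with a spare slot is retained or reset in favor of the simpler model with higher evidence.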


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f6d/7250191/391de8157b3f/fncom-14-00041-g0001.jpg
