Frermann, Lea; Lapata, Mirella
Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh.
Cogn Sci. 2016 Aug;40(6):1333-81. doi: 10.1111/cogs.12304. Epub 2015 Nov 2.
Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that integrates newly observed data sequentially and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories that yield a closer fit to behavioral data than related models, while at the same time acquiring features that characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.)
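The abstract's central mechanism is incremental learning with a particle filter. The following sketch is not the authors' implementation; it only illustrates how such a learner can be organized: each particle holds a partition of words into categories, each incoming (word, feature-count) observation is assigned to a category under an assumed Chinese-restaurant-process prior with a Dirichlet-multinomial feature model, and particles are reweighted and resampled as observations arrive. All names (Particle, particle_filter) and parameter values (ALPHA, BETA, N_PARTICLES, VOCAB) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of incremental category induction with a particle filter.
# CRP prior, Dirichlet-multinomial feature model, and all constants are assumed.
import copy
import numpy as np

ALPHA = 1.0        # CRP concentration parameter (assumed)
BETA = 0.1         # Dirichlet smoothing over feature counts (assumed)
N_PARTICLES = 50   # number of particles (assumed)
VOCAB = 1000       # size of the feature vocabulary (assumed)

class Particle:
    """One hypothesis about the category structure seen so far."""
    def __init__(self):
        self.assignments = {}   # word -> category id (latest assignment)
        self.counts = []        # per-category feature count vectors
        self.sizes = []         # number of observations per category

    def predictive(self, k, features):
        """Dirichlet-multinomial predictive probability of a feature-count
        vector under category k; k == len(self.counts) denotes a new category.
        (A real implementation would work in log space to avoid underflow.)"""
        counts = self.counts[k] if k < len(self.counts) else np.zeros(VOCAB)
        probs = (counts + BETA) / (counts.sum() + BETA * VOCAB)
        return float(np.prod(probs ** features))

    def assign(self, word, features):
        """Sample a category proportionally to CRP prior x likelihood, update
        sufficient statistics, and return the marginal likelihood of the
        observation (used to update this particle's weight)."""
        n = sum(self.sizes)
        priors = [s / (n + ALPHA) for s in self.sizes] + [ALPHA / (n + ALPHA)]
        scores = np.array([p * self.predictive(k, features)
                           for k, p in enumerate(priors)])
        weight = scores.sum()
        k = int(np.random.choice(len(scores), p=scores / weight))
        if k == len(self.counts):            # open a new category
            self.counts.append(np.zeros(VOCAB))
            self.sizes.append(0)
        self.counts[k] += features
        self.sizes[k] += 1
        self.assignments[word] = k
        return weight

def particle_filter(stream):
    """Process (word, feature-count vector) observations one at a time,
    reweighting particles after each observation and resampling when the
    effective sample size drops too low."""
    particles = [Particle() for _ in range(N_PARTICLES)]
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    for word, features in stream:
        weights = weights * np.array([p.assign(word, features) for p in particles])
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:
            idx = np.random.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
            particles = [copy.deepcopy(particles[i]) for i in idx]
            weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    return particles, weights
```

Under these assumptions, a stream could be built from (target word, context-word count vector) pairs extracted from a corpus; after processing, the highest-weight particle's assignments give the induced categories and its per-category counts indicate which features characterize them, mirroring the joint category-and-feature learning described in the abstract.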