Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47404, USA.
Top Cogn Sci. 2012 Jan;4(1):103-20. doi: 10.1111/j.1756-8765.2011.01176.x.
The literature contains a disconnect between accounts of how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. We propose here a model to integrate these two information sources. Specifically, the model uses the global structure of memory to exploit the redundancy between language and perception in order to generate inferred perceptual representations for words with which the model has no perceptual experience. We test the model on a variety of different datasets from grounded cognition experiments and demonstrate that this diverse set of results can be explained as perceptual simulation (cf. Barsalou, Simmons, Barbey, & Wilson, 2003) within a global memory model.
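To make the inference step described in the abstract concrete, the following minimal Python sketch illustrates one way a perceptual representation could be generated for a word the model has never perceived: as a linguistic-similarity-weighted average of the perceptual vectors of words that do have perceptual experience. This is an illustration of the general idea, not the authors' implementation; the function names, the weighting exponent lam, and the toy vectors are all hypothetical.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def infer_perceptual(target_word, linguistic, perceptual, lam=3.0):
    # Infer a perceptual vector for a word that lacks one, as a
    # linguistic-similarity-weighted average of the perceptual vectors
    # of words with perceptual experience. The exponent lam is a
    # hypothetical parameter that sharpens the weighting toward the
    # most linguistically similar neighbours.
    t = linguistic[target_word]
    weights, vectors = [], []
    for word, p_vec in perceptual.items():
        sim = max(cosine(t, linguistic[word]), 0.0)
        weights.append(sim ** lam)
        vectors.append(p_vec)
    return np.average(np.asarray(vectors), axis=0, weights=np.asarray(weights))

# Toy example: "lime" has linguistic experience but no perceptual vector;
# its inferred representation is pulled toward "lemon", its closest
# linguistic neighbour, rather than "truck".
linguistic = {
    "lemon": np.array([0.9, 0.1, 0.0]),
    "truck": np.array([0.0, 0.2, 0.9]),
    "lime":  np.array([0.8, 0.2, 0.1]),
}
perceptual = {
    "lemon": np.array([1.0, 1.0, 0.0]),   # e.g. yellow, sour
    "truck": np.array([0.0, 0.0, 1.0]),   # e.g. large, metallic
}
print(infer_perceptual("lime", linguistic, perceptual))

The weighting scheme here stands in for whatever global memory mechanism the model actually uses to exploit the redundancy between language and perception; the key point is only that words nearby in linguistic space contribute most to the inferred perceptual representation.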