Yildirim Ilker, Jacobs Robert A
Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, USA.
Psychon Bull Rev. 2015 Jun;22(3):673-86. doi: 10.3758/s13423-014-0734-y. Epub 2014 Oct 23.
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis, which states that people extract the intrinsic, modality-independent properties of objects and events and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that they do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
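The abstract describes the model only at a high level, so the sketch below is not the authors' implementation; it is a minimal illustration, under hypothetical assumptions, of the general idea: amodal sequence categories are represented as symbolic "programs", a posterior over programs is learned by Bayesian inference from exemplars presented in one modality (here, auditory), and the same posterior is then reused to evaluate sequences presented in another modality (here, visual). The program hypotheses, token-to-symbol mappings, noise parameter, and description-length prior are all illustrative choices, not details from the paper.

```python
"""
Minimal sketch (not the authors' model) of Bayesian inference over symbolic
sequence-generating programs, with cross-modal transfer via an amodal alphabet.
All symbols, mappings, and parameters are hypothetical.
"""
import math

# Amodal alphabet; auditory and visual tokens both map onto it.
AUDITORY_MAP = {"low_tone": "X", "high_tone": "Y"}    # hypothetical mapping
VISUAL_MAP   = {"dim_flash": "X", "bright_flash": "Y"}

SEQ_LEN = 6

# Hypothesis space: symbolic "programs" that each generate a canonical
# amodal sequence.  Shorter descriptions receive higher prior probability
# (a crude description-length prior).
PROGRAMS = [
    ("repeat(XY)",        lambda: ("X", "Y") * (SEQ_LEN // 2)),
    ("repeat(YX)",        lambda: ("Y", "X") * (SEQ_LEN // 2)),
    ("block(X)+block(Y)", lambda: ("X",) * (SEQ_LEN // 2) + ("Y",) * (SEQ_LEN // 2)),
    ("all(X)",            lambda: ("X",) * SEQ_LEN),
    ("all(Y)",            lambda: ("Y",) * SEQ_LEN),
]

NOISE = 0.1  # per-position probability that an exemplar deviates from its program

def log_prior(name):
    # Penalize longer program descriptions.
    return -len(name)

def log_likelihood(generate, amodal_seq):
    canonical = generate()
    return sum(math.log(1 - NOISE) if c == s else math.log(NOISE)
               for c, s in zip(canonical, amodal_seq))

def posterior(training_amodal_seqs):
    """Posterior over programs given amodal training sequences."""
    log_post = [log_prior(name) + sum(log_likelihood(gen, s) for s in training_amodal_seqs)
                for name, gen in PROGRAMS]
    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    total = sum(weights)
    return {name: w / total for (name, _), w in zip(PROGRAMS, weights)}

def predictive(post, amodal_seq):
    """Posterior-predictive probability of a new amodal sequence."""
    return sum(post[name] * math.exp(log_likelihood(gen, amodal_seq))
               for name, gen in PROGRAMS)

# Train on exemplars observed in the auditory modality.
auditory_exemplars = [
    ["low_tone", "high_tone", "low_tone", "high_tone", "low_tone", "high_tone"],
    ["low_tone", "high_tone", "low_tone", "low_tone", "low_tone", "high_tone"],
]
train_amodal = [tuple(AUDITORY_MAP[t] for t in seq) for seq in auditory_exemplars]
post = posterior(train_amodal)
print("Posterior over programs:", post)

# Test on a sequence observed in the visual modality (cross-modal transfer):
# the learned amodal program assigns it high predictive probability.
visual_test = ["dim_flash", "bright_flash", "dim_flash", "bright_flash",
               "dim_flash", "bright_flash"]
test_amodal = tuple(VISUAL_MAP[t] for t in visual_test)
print("Predictive probability of visual test sequence:", predictive(post, test_amodal))
```

Because the programs operate over amodal symbols, nothing in the posterior is tied to the training modality; any sensory token stream that can be mapped onto the same alphabet can be scored, which is the sense in which the sketch mirrors the transfer result described above.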