Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
Neuroimage. 2011 Aug 15;57(4):1601-7. doi: 10.1016/j.neuroimage.2011.05.043. Epub 2011 May 25.
In modern perceptual neuroscience, the focus of interest has shifted from individual modalities studied in isolation to an acknowledgement of the importance of multisensory processing. One particularly well-known example of cross-modal interaction is the McGurk illusion. It has been shown that this illusion can be modified such that it creates an auditory perceptual bias that lasts beyond the duration of audiovisual stimulation, a process referred to as cross-modal recalibration (Bertelson et al., 2003). Recently, we have suggested that this perceptual bias is stored in auditory cortex, by demonstrating the feasibility of retrieving the subjective perceptual interpretation of recalibrated ambiguous phonemes from functional magnetic resonance imaging (fMRI) measurements in these regions (Kilian-Hütten et al., 2011). However, this does not explain which brain areas integrate the information from the two senses and thus represent the origin of the auditory perceptual bias. Here we analyzed fMRI data from audiovisual recalibration blocks, utilizing behavioral data from perceptual classifications of ambiguous auditory phonemes that followed these blocks later in time. Following this logic, we identified a network of brain areas (bilateral inferior parietal lobe [IPL], inferior frontal sulcus [IFS], and posterior middle temporal gyrus [MTG]) whose activation during audiovisual exposure anticipated auditory perceptual tendencies later in time. We propose a model in which a higher-order network, including IPL and IFS, accommodates audiovisual integrative learning processes, which are responsible for the installation of a perceptual bias in auditory regions. This bias then determines constructive perceptual processing.