Batson Melissa A, Beer Anton L, Seitz Aaron R, Watanabe Takeo
Boston University, Boston, MA 02215, USA.
Seeing Perceiving. 2011;24(6):579-94. doi: 10.1163/187847611X603738.
A large proportion of the human cortex is devoted to visual processing. Contrary to the traditional belief that multimodal integration takes place in multimodal processing areas separate from visual cortex, several studies have found that sounds may directly alter processing in visual brain areas. Furthermore, recent findings show that perceptual learning can change the perceptual mechanisms that relate the auditory and visual senses. However, there is still debate about the systems involved in cross-modal learning. Here, we investigated the specificity of audio-visual perceptual learning. Audio-visual cueing effects were tested on a Gabor orientation task and an object discrimination task in the presence of lateralised sound cues before and after eight days of cross-modal, task-irrelevant perceptual learning. During training, the sound cues were paired with visual stimuli that were misaligned at a proximal (trained) visual field location relative to the sound. Training was performed with one eye patched and with only one Gabor orientation. Consistent with previous findings, we found that cross-modal perceptual training shifted the audio-visual cueing effect towards the trained retinotopic location. However, this shift in audio-visual tuning was observed only for the trained stimulus (Gabors), at the trained orientation, and in the trained eye. This specificity suggests that the multimodal interactions resulting from cross-modal (audio-visual) task-irrelevant perceptual learning involve so-called unisensory visual processing areas in humans. Our findings provide further support for recent anatomical and physiological findings that suggest relatively early interactions in cross-modal processing.