
Spatial shifts of audio-visual interactions by perceptual learning are specific to the trained orientation and eye.

Author Information

Batson Melissa A, Beer Anton L, Seitz Aaron R, Watanabe Takeo

Affiliations

Boston University, Boston, MA 02215, USA.

Publication Information

Seeing Perceiving. 2011;24(6):579-94. doi: 10.1163/187847611X603738.

Abstract

A large proportion of the human cortex is devoted to visual processing. Contrary to the traditional belief that multimodal integration takes place in multimodal processing areas separate from visual cortex, several studies have found that sounds may directly alter processing in visual brain areas. Furthermore, recent findings show that perceptual learning can change the perceptual mechanisms that relate auditory and visual senses. However, there is still a debate about the systems involved in cross-modal learning. Here, we investigated the specificity of audio-visual perceptual learning. Audio-visual cuing effects were tested on a Gabor orientation task and an object discrimination task in the presence of lateralised sound cues before and after eight days of cross-modal task-irrelevant perceptual learning. During training, the sound cues were paired with visual stimuli that were misaligned at a proximal (trained) visual field location relative to the sound. Training was performed with one eye patched and with only one Gabor orientation. Consistent with previous findings, we found that cross-modal perceptual training shifted the audio-visual cuing effect towards the trained retinotopic location. However, this shift in audio-visual tuning was only observed for the trained stimulus (Gabors), at the trained orientation, and in the trained eye. This specificity suggests that multimodal interactions resulting from cross-modal (audio-visual) task-irrelevant perceptual learning involve so-called unisensory visual processing areas in humans. Our findings provide further support for recent anatomical and physiological findings that suggest relatively early interactions in cross-modal processing.
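
For readers unfamiliar with the stimulus named in the abstract, the sketch below generates a Gabor patch (an oriented sinusoidal grating windowed by a Gaussian envelope), the class of stimulus used in the orientation task. The function name and all parameter values are illustrative assumptions, not taken from the study.

# Illustrative sketch only: a generic Gabor patch. Parameter values are
# assumptions for demonstration, not the stimulus parameters of the study.
import numpy as np

def gabor_patch(size=128, wavelength=16.0, orientation_deg=45.0,
                sigma=20.0, phase=0.0, contrast=1.0):
    """Return a size x size array of luminance modulation in [-contrast, contrast]."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]                # pixel coordinates centred on the patch
    theta = np.deg2rad(orientation_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the grating axis
    grating = np.cos(2.0 * np.pi * x_rot / wavelength + phase)   # oriented sinusoidal carrier
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # Gaussian spatial window
    return contrast * grating * envelope

if __name__ == "__main__":
    patch = gabor_patch(orientation_deg=45.0)
    print(patch.shape, float(patch.min()), float(patch.max()))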

Similar Articles

Perceptual load interacts with stimulus processing across sensory modalities.
Eur J Neurosci. 2009 Jun;29(12):2426-34. doi: 10.1111/j.1460-9568.2009.06774.x. Epub 2009 May 26.

Cited By

Towards a whole brain model of Perceptual Learning.
Curr Opin Behav Sci. 2018 Apr;20:47-55. doi: 10.1016/j.cobeha.2017.10.004. Epub 2017 Dec 13.

Numerosity representation is encoded in human subcortex.
Proc Natl Acad Sci U S A. 2017 Apr 4;114(14):E2806-E2815. doi: 10.1073/pnas.1613982114. Epub 2017 Mar 20.

Audiovisual crossmodal cuing effects in front and rear space.
Front Psychol. 2015 Jul 30;6:1086. doi: 10.3389/fpsyg.2015.01086. eCollection 2015.

On the evolution of conscious attention.
Psychon Bull Rev. 2015 Jun;22(3):595-613. doi: 10.3758/s13423-014-0718-y.

References

The phenomenon of task-irrelevant perceptual learning.
Vision Res. 2009 Oct;49(21):2604-10. doi: 10.1016/j.visres.2009.08.003. Epub 2009 Aug 7.
