

Sequential audiovisual interactions during speech perception: a whole-head MEG study.

Author information

Hertrich Ingo, Mathiak Klaus, Lutzenberger Werner, Menning Hans, Ackermann Hermann

Affiliation

Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.

Publication details

Neuropsychologia. 2007 Mar 25;45(6):1342-54. doi: 10.1016/j.neuropsychologia.2006.09.019. Epub 2006 Oct 25.

Abstract

Using whole-head magnetoencephalography (MEG), audiovisual (AV) interactions during speech perception (/ta/- and /pa/-syllables) were investigated in 20 subjects. Congruent AV events served as the 'standards' of an oddball design. The deviants encompassed incongruent /ta/-/pa/ configurations differing from the standards either in the acoustic or the visual domain. As an auditory non-speech control condition, the same video signals were synchronized with either one of two complex tones. As in natural speech, visual movement onset preceded acoustic signals by about 150 ms. First, the impact of visual information on auditorily evoked fields to non-speech sounds was determined. Larger facial movements (/pa/ versus /ta/) yielded enhanced early responses such as the M100 component, indicating, most presumably, anticipatory pre-activation of auditory cortex by visual motion cues. As a second step of analysis, mismatch fields (MMF) were calculated. Acoustic deviants elicited a typical MMF, peaking ca. 180 ms after stimulus onset, whereas visual deviants gave rise to later responses (220 ms) of a more posterior-medial source location. Finally, a late (275 ms), left-lateralized visually-induced MMF component, resembling the acoustic mismatch response, emerged during the speech condition, presumably reflecting phonetic/linguistic operations. There is mounting functional imaging evidence for an early impact of visual information on auditory cortical regions during speech perception. The present study suggests at least two successive AV interactions in association with syllable recognition tasks: early activation of auditory areas depending upon visual motion cues and a later speech-specific left-lateralized response mediated, conceivably, by backward-projections from multisensory areas.

