

Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

Affiliations

University of California, Irvine, USA.

Arizona State University, USA.

Publication Information

Cortex. 2018 Jun;103:360-371. doi: 10.1016/j.cortex.2018.03.030. Epub 2018 Apr 10.

Abstract

Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration.
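
For readers unfamiliar with the behavioral measures named above, the sketch below shows how the two classic metrics of AV speech integration are commonly quantified: an enhancement score for matched AV speech relative to audio alone, and a fusion rate for mismatched (McGurk) trials. This is an illustrative assumption only, not the paper's scoring procedure; the function names, the normalized-gain formula, and the example numbers are hypothetical.

# Illustrative sketch only: two commonly used behavioral metrics of
# audiovisual (AV) speech integration, computed from hypothetical data.
# The study's exact scoring procedures may differ.

def av_enhancement(acc_audio_only: float, acc_audiovisual: float) -> float:
    """Normalized gain from adding visual speech to degraded audio:
    the fraction of the headroom above audio-only accuracy that is recovered."""
    headroom = 1.0 - acc_audio_only
    if headroom <= 0.0:
        return 0.0  # audio-only accuracy is already at ceiling
    return (acc_audiovisual - acc_audio_only) / headroom

def mcgurk_fusion_rate(n_fused_reports: int, n_incongruent_trials: int) -> float:
    """Proportion of mismatched trials (e.g., auditory /ba/ + visual /ga/) on
    which the illusory fused percept (e.g., /da/) is reported."""
    return n_fused_reports / n_incongruent_trials

# Hypothetical example values:
print(av_enhancement(acc_audio_only=0.40, acc_audiovisual=0.70))        # -> 0.5
print(mcgurk_fusion_rate(n_fused_reports=18, n_incongruent_trials=30))  # -> 0.6

The study's first finding, that these two metrics do not necessarily measure the same process, is precisely the observation that scores like these can dissociate across patients and lesion sites.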


Similar Articles

1. Neural networks supporting audiovisual integration for speech: A large-scale lesion study. Cortex. 2018 Jun;103:360-371. doi: 10.1016/j.cortex.2018.03.030. Epub 2018 Apr 10.
3. Neural correlates of audiovisual speech processing in a second language. Brain Lang. 2013 Sep;126(3):253-62. doi: 10.1016/j.bandl.2013.05.009. Epub 2013 Jul 18.
4. An ALE meta-analysis on the audiovisual integration of speech signals. Hum Brain Mapp. 2014 Nov;35(11):5587-605. doi: 10.1002/hbm.22572. Epub 2014 Jul 4.
5. The Motor Network Reduces Multisensory Illusory Perception. J Neurosci. 2018 Nov 7;38(45):9679-9688. doi: 10.1523/JNEUROSCI.3650-17.2018. Epub 2018 Sep 24.
6. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech. PLoS Comput Biol. 2017 Feb 16;13(2):e1005229. doi: 10.1371/journal.pcbi.1005229. eCollection 2017 Feb.
7. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding. J Neurosci. 2018 Feb 14;38(7):1835-1849. doi: 10.1523/JNEUROSCI.1566-17.2017. Epub 2017 Dec 20.
8. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception. J Neurosci. 2022 Jan 19;42(3):435-442. doi: 10.1523/JNEUROSCI.0114-21.2021. Epub 2021 Nov 23.
10. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study. J Cogn Neurosci. 2011 Jan;23(1):221-37. doi: 10.1162/jocn.2010.21421.

Cited By

1. Seeing speech: Neural mechanisms of cued speech perception in prelingually deaf and hearing users. Imaging Neurosci (Camb). 2025 Jun 24;3. doi: 10.1162/IMAG.a.53. eCollection 2025.
4. Resting-state functional connectivity changes following audio-tactile speech training. Front Neurosci. 2025 Apr 29;19:1482828. doi: 10.3389/fnins.2025.1482828. eCollection 2025.
8. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci. 2023 Jul 25;13(8):1126. doi: 10.3390/brainsci13081126.
9. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex. 2023 Aug;165:86-100. doi: 10.1016/j.cortex.2023.04.011. Epub 2023 May 16.
10. Effect of Bilateral Opercular Syndrome on Speech Perception. Neurobiol Lang (Camb). 2021 Jul 13;2(3):335-353. doi: 10.1162/nol_a_00037. eCollection 2021.

References

1. Brain regions essential for word comprehension: Drawing inferences from patients. Ann Neurol. 2017 Jun;81(6):759-768. doi: 10.1002/ana.24941. Epub 2017 Jun 2.
2. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus. Front Hum Neurosci. 2017 Apr 7;11:174. doi: 10.3389/fnhum.2017.00174. eCollection 2017.
3. Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect. Atten Percept Psychophys. 2017 Feb;79(2):396-403. doi: 10.3758/s13414-016-1238-9.
4. Audiovisual integration of speech in a patient with Broca's Aphasia. Front Psychol. 2015 Apr 28;6:435. doi: 10.3389/fpsyg.2015.00435. eCollection 2015.
6. Neural pathways for visual speech perception. Front Neurosci. 2014 Dec 1;8:386. doi: 10.3389/fnins.2014.00386. eCollection 2014.
8. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex. PLoS One. 2013 Jun 21;8(6):e68959. doi: 10.1371/journal.pone.0068959. Print 2013.
9. Response Bias Modulates the Speech Motor System during Syllable Discrimination. Front Psychol. 2012 May 28;3:157. doi: 10.3389/fpsyg.2012.00157. eCollection 2012.
10. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage. 2012 Aug 15;62(2):816-47. doi: 10.1016/j.neuroimage.2012.04.062. Epub 2012 May 12.
