Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech.

Affiliation

Department of Neurobiology, University of Texas Medical School at Houston, Houston, Texas 77030, USA.

Publication Information

J Neurosci. 2011 Feb 2;31(5):1704-14. doi: 10.1523/JNEUROSCI.4853-10.2011.

Abstract

Humans are remarkably adept at understanding speech, even when it is contaminated by noise. Multisensory integration may explain some of this ability: combining independent information from the auditory modality (vocalizations) and the visual modality (mouth movements) reduces noise and increases accuracy. Converging evidence suggests that the superior temporal sulcus (STS) is a critical brain area for multisensory integration, but little is known about its role in the perception of noisy speech. Behavioral studies have shown that perceptual judgments are weighted by the reliability of the sensory modality: more reliable modalities are weighted more strongly, even if the reliability changes rapidly. We hypothesized that changes in the functional connectivity of STS with auditory and visual cortex could provide a neural mechanism for perceptual reliability weighting. To test this idea, we performed five blood oxygenation level-dependent functional magnetic resonance imaging and behavioral experiments in 34 healthy subjects. We found increased functional connectivity between the STS and auditory cortex when the auditory modality was more reliable (less noisy) and increased functional connectivity between the STS and visual cortex when the visual modality was more reliable, even when the reliability changed rapidly during presentation of successive words. This finding matched the results of a behavioral experiment in which the perception of incongruent audiovisual syllables was biased toward the more reliable modality, even with rapidly changing reliability. Changes in STS functional connectivity may be an important neural mechanism underlying the perception of noisy speech.
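The "reliability weighting" described in the abstract is usually formalized as maximum-likelihood cue combination (each cue weighted by its inverse variance), and seed-based functional connectivity is commonly measured as the correlation between region-of-interest time series. The sketch below is illustrative only and uses hypothetical variable names; it is a minimal rendering of those two standard ideas, not the authors' analysis code.

```python
# Illustrative sketch (not the paper's analysis pipeline):
# 1) reliability-weighted cue combination, 2) a simple correlation-based
# proxy for seed-based functional connectivity between two ROI time series.

import numpy as np

def reliability_weighted_estimate(audio_cue, visual_cue, audio_sd, visual_sd):
    """Combine two noisy cues, weighting each by its reliability (1 / variance)."""
    w_a = 1.0 / audio_sd**2
    w_v = 1.0 / visual_sd**2
    estimate = (w_a * audio_cue + w_v * visual_cue) / (w_a + w_v)
    combined_sd = np.sqrt(1.0 / (w_a + w_v))  # fused estimate is less noisy than either cue
    return estimate, combined_sd

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between two ROI time series, a common
    functional-connectivity measure (illustrative, not the paper's exact method)."""
    return np.corrcoef(seed_ts, target_ts)[0, 1]

if __name__ == "__main__":
    # When the auditory cue is cleaner (smaller sd), it dominates the fused percept.
    est, sd = reliability_weighted_estimate(audio_cue=0.0, visual_cue=1.0,
                                            audio_sd=0.5, visual_sd=2.0)
    print(f"fused estimate {est:.2f} (pulled toward the cleaner auditory cue), sd {sd:.2f}")

    # Toy time series: connectivity is high when the target tracks the seed signal.
    rng = np.random.default_rng(0)
    seed = rng.standard_normal(200)
    coupled = seed + 0.5 * rng.standard_normal(200)
    print(f"seed-target correlation: {seed_connectivity(seed, coupled):.2f}")
```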

Cited By

6. Alterations of Audiovisual Integration in Alzheimer's Disease.
Neurosci Bull. 2023 Dec;39(12):1859-1872. doi: 10.1007/s12264-023-01125-7. Epub 2023 Oct 9.

References Cited in This Article

3. Context-conditioned generalization in adaptation to distorted speech.
J Exp Psychol Hum Percept Perform. 2010 Jun;36(3):704-28. doi: 10.1037/a0017449.
5. What does the right hemisphere know about phoneme categories?
J Cogn Neurosci. 2011 Mar;23(3):552-69. doi: 10.1162/jocn.2010.21495. Epub 2010 Mar 29.
9. Temporal lobe white matter asymmetry and language laterality in epilepsy patients.
Neuroimage. 2010 Feb 1;49(3):2033-44. doi: 10.1016/j.neuroimage.2009.10.055. Epub 2009 Oct 27.
