
Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

Author Information

Kumar G Vinodh, Halder Tamesh, Jaiswal Amit K, Mukherjee Abhishek, Roy Dipanjan, Banerjee Arpan

Affiliations

Cognitive Brain Lab, National Brain Research Centre, Gurgaon, India.

Centre for Behavioural and Cognitive Sciences, University of Allahabad, Allahabad, India.

Publication Information

Front Psychol. 2016 Oct 13;7:1558. doi: 10.3389/fpsyg.2016.01558. eCollection 2016.

Abstract

Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is that listeners presented with incongruent audio-visual (AV) speech stimuli perceive an illusory (cross-modal) speech sound (the McGurk effect). Recent neuroimaging studies of AV speech perception emphasize the role of frontal, parietal, and integrative brain sites in the vicinity of the superior temporal sulcus (STS) in multisensory speech perception. However, whether and how networks across the whole brain participate in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations in distributed brain sites may provide valuable insights into the processing and fusion of AV speech. Varying psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent AV speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception, compared to unisensory perception, within a temporal window of 300-600 ms following stimulus onset. During asynchronous speech stimuli, global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
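To make the "global coherence" measure concrete: a common formulation (not necessarily the authors' exact pipeline, which uses time-frequency global coherence over all sensor pairs) summarizes coherence across all channels at each frequency as the largest eigenvalue of the cross-spectral matrix divided by the sum of its eigenvalues. The sketch below is a minimal illustration under that assumption; the channel count, trial count, and sampling rate are hypothetical, and the surrogate data are fabricated for demonstration only.

```python
# Minimal sketch of frequency-resolved global coherence for multichannel
# EEG epochs. Assumption: global coherence at frequency f is defined as
# the largest eigenvalue of the trial-averaged cross-spectral matrix
# divided by the sum of all its eigenvalues (a value in [1/n_channels, 1]).
import numpy as np

def global_coherence(epochs, fs):
    """epochs: (n_trials, n_channels, n_samples) real array; fs in Hz.
    Returns (freqs, gc) with gc[k] the global coherence at freqs[k]."""
    n_trials, n_ch, n_samp = epochs.shape
    taper = np.hanning(n_samp)                     # reduce spectral leakage
    spectra = np.fft.rfft(epochs * taper, axis=-1)  # (trials, ch, freqs)
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
    gc = np.empty(len(freqs))
    for k in range(len(freqs)):
        x = spectra[:, :, k]                        # (trials, ch), complex
        csd = (x.conj().T @ x) / n_trials           # Hermitian cross-spectral matrix
        eigvals = np.linalg.eigvalsh(csd)           # real, ascending order
        gc[k] = eigvals[-1] / eigvals.sum()
    return freqs, gc

# Illustrative use: 64-channel surrogate data sharing a 40 Hz (gamma-band)
# component, so global coherence should peak near 40 Hz.
rng = np.random.default_rng(0)
fs, n_samp = 250, 500
t = np.arange(n_samp) / fs
shared = np.sin(2 * np.pi * 40 * t)                 # common drive across channels
epochs = rng.standard_normal((30, 64, n_samp)) + shared
freqs, gc = global_coherence(epochs, fs)
```

With a strong common component, the eigenvalue ratio approaches 1 at the driven frequency and stays near the noise floor elsewhere, which is what makes the measure useful for contrasting coherence changes between perceptual conditions.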


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c12e/5062921/0cbbeccf76e4/fpsyg-07-01558-g0001.jpg
