


Segregation and Integration of Cortical Information Processing Underlying Cross-Modal Perception

Authors

Kumar G Vinodh, Kumar Neeraj, Roy Dipanjan, Banerjee Arpan

Affiliations

Cognitive Brain Lab, National Brain Research Centre, NH 8, Manesar, Gurgaon 122051, India.

Centre of Behavioural and Cognitive Sciences, University of Allahabad, Allahabad 211002, India.

Publication

Multisens Res. 2018 Jan 1;31(5):481-500. doi: 10.1163/22134808-00002574.

DOI: 10.1163/22134808-00002574
PMID: 31264600
Abstract

Visual cues from the speaker's face influence the perception of speech. An example of this influence is demonstrated by the McGurk-effect where illusory (cross-modal) sounds are perceived following presentation of incongruent audio-visual (AV) stimuli. Previous studies report the engagement of specific cortical modules that are spatially distributed during cross-modal perception. However, the limits of the underlying representational space and the cortical network mechanisms remain unclear. In this combined psychophysical and electroencephalography (EEG) study, the participants reported their perception while listening to a set of synchronous and asynchronous incongruent AV stimuli. We identified the neural representation of subjective cross-modal perception at different organizational levels - at specific locations in sensor space and at the level of the large-scale brain network estimated from between-sensor interactions. We identified an enhanced positivity in the event-related potential peak around 300 ms following stimulus onset associated with cross-modal perception. At the spectral level, cross-modal perception involved an overall decrease in power at the frontal and temporal regions at multiple frequency bands and at all AV lags, along with an increased power at the occipital scalp region for synchronous AV stimuli. At the level of large-scale neuronal networks, enhanced functional connectivity at the gamma band involving frontal regions serves as a marker of AV integration. Thus, we report in one single study that segregation of information processing at individual brain locations and integration of information over candidate brain networks underlie multisensory speech perception.


Similar Articles

1. Segregation and Integration of Cortical Information Processing Underlying Cross-Modal Perception.
   Multisens Res. 2018 Jan 1;31(5):481-500. doi: 10.1163/22134808-00002574.
2. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.
   Front Psychol. 2016 Oct 13;7:1558. doi: 10.3389/fpsyg.2016.01558. eCollection 2016.
3. High segregation and diminished global integration in large-scale brain functional networks enhances the perceptual binding of cross-modal stimuli.
   Cereb Cortex. 2024 Aug 1;34(8). doi: 10.1093/cercor/bhae323.
4. Biophysical mechanisms governing large-scale brain network dynamics underlying individual-specific variability of perception.
   Eur J Neurosci. 2020 Oct;52(7):3746-3762. doi: 10.1111/ejn.14747. Epub 2020 Jun 29.
5. Distinct cortical locations for integration of audiovisual speech and the McGurk effect.
   Front Psychol. 2014 Jun 2;5:534. doi: 10.3389/fpsyg.2014.00534. eCollection 2014.
6. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense.
   J Neurosci. 2017 May 24;37(21):5274-5287. doi: 10.1523/JNEUROSCI.2926-16.2017. Epub 2017 Apr 27.
7. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.
   J Neurophysiol. 2015 Apr 1;113(7):2342-50. doi: 10.1152/jn.00783.2014. Epub 2015 Jan 7.
8. Steady-State EEG and Psychophysical Measures of Multisensory Integration to Cross-Modally Synchronous and Asynchronous Acoustic and Vibrotactile Amplitude Modulation Rate.
   Multisens Res. 2018 Jan 1;31(5):391-418. doi: 10.1163/22134808-00002549.
9. Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma-band oscillations.
   Neuropsychologia. 2007 Feb 1;45(3):561-71. doi: 10.1016/j.neuropsychologia.2006.01.013. Epub 2006 Mar 20.
10. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.
   Neuroimage. 2018 Jul 15;175:425-437. doi: 10.1016/j.neuroimage.2018.04.023. Epub 2018 Apr 12.

Cited By

1. Sex differences in development of functional connections in the face processing network.
   J Neuroimaging. 2024 Mar-Apr;34(2):280-290. doi: 10.1111/jon.13185. Epub 2024 Jan 2.