

Cerebral correlates and statistical criteria of cross-modal face and voice integration.

Authors

Love Scott A, Pollick Frank E, Latinus Marianne

Affiliation

School of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.

Publication Information

Seeing Perceiving. 2011;24(4):351-67. doi: 10.1163/187847511X584452.

DOI: 10.1163/187847511X584452
PMID: 21864459
Abstract

Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (e.g., super-additive, max and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face-voice. Using this single dataset, we examine all these statistical criteria in an attempt to define loci of face-voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face-voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We concluded that, when investigating multisensory integration with fMRI, all these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.
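The three statistical criteria named in the abstract can be stated concretely: a voxel's bimodal response is compared against the sum, the maximum, or the mean of its two unimodal responses. The following sketch is illustrative only — the function names and beta values are hypothetical examples, not data or code from the study:

```python
# Illustrative check of the three fMRI multisensory-integration criteria
# discussed in the abstract. All beta values below are made-up examples.

def super_additive(bimodal: float, uni_a: float, uni_b: float) -> bool:
    """Bimodal response must exceed the SUM of the unimodal responses."""
    return bimodal > uni_a + uni_b

def max_criterion(bimodal: float, uni_a: float, uni_b: float) -> bool:
    """Bimodal response must exceed the LARGER unimodal response."""
    return bimodal > max(uni_a, uni_b)

def mean_criterion(bimodal: float, uni_a: float, uni_b: float) -> bool:
    """Bimodal response must exceed the AVERAGE of the unimodal responses."""
    return bimodal > (uni_a + uni_b) / 2

# Hypothetical voxel betas: face = 1.0, voice = -0.4 (a deactivation),
# face-voice bimodal = 0.8.
face, voice, bimodal = 1.0, -0.4, 0.8
print(super_additive(bimodal, face, voice))  # True:  0.8 > 0.6
print(max_criterion(bimodal, face, voice))   # False: 0.8 < 1.0
print(mean_criterion(bimodal, face, voice))  # True:  0.8 > 0.3
```

The example also illustrates the abstract's caveat: when one unimodal response is a deactivation (voice = -0.4), the super-additive and mean criteria are satisfied even though the bimodal response never exceeds the stronger unimodal response, which is why those criteria tend to flag deactivation-driven regions while the max criterion remains the more stringent test.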


Similar Articles

1. Cerebral correlates and statistical criteria of cross-modal face and voice integration.
Seeing Perceiving. 2011;24(4):351-67. doi: 10.1163/187847511X584452.
2. Cross-modal interactions between human faces and voices involved in person recognition.
Cortex. 2011 Mar;47(3):367-76. doi: 10.1016/j.cortex.2010.03.003. Epub 2010 Mar 24.
3. The neural network sustaining the crossmodal processing of human gender from faces and voices: an fMRI study.
Neuroimage. 2011 Jan 15;54(2):1654-61. doi: 10.1016/j.neuroimage.2010.08.073. Epub 2010 Sep 9.
4. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study.
J Cogn Neurosci. 2011 Jan;23(1):221-37. doi: 10.1162/jocn.2010.21421.
5. Cerebral representation of non-verbal emotional perception: fMRI reveals audiovisual integration area between voice- and face-sensitive regions in the superior temporal sulcus.
Neuropsychologia. 2009 Dec;47(14):3059-66. doi: 10.1016/j.neuropsychologia.2009.07.001. Epub 2009 Jul 21.
6. Hearing facial identities: brain correlates of face-voice integration in person identification.
Cortex. 2011 Oct;47(9):1026-37. doi: 10.1016/j.cortex.2010.11.011. Epub 2010 Dec 4.
7. Interaction of face and voice areas during speaker recognition.
J Cogn Neurosci. 2005 Mar;17(3):367-76. doi: 10.1162/0898929053279577.
8. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect.
Neuroimage. 2001 Aug;14(2):427-38. doi: 10.1006/nimg.2001.0812.
9. Voice recognition and cross-modal responses to familiar speakers' voices in prosopagnosia.
Cereb Cortex. 2006 Sep;16(9):1314-22. doi: 10.1093/cercor/bhj073. Epub 2005 Nov 9.
10. The neural network sustaining crossmodal integration is impaired in alcohol-dependence: an fMRI study.
Cortex. 2013 Jun;49(6):1610-26. doi: 10.1016/j.cortex.2012.04.012. Epub 2012 May 8.

Cited By

1. Simplified Visual Stimuli Impair Retrieval and Transfer in Audiovisual Equivalence Learning Tasks.
Brain Behav. 2025 Feb;15(2):e70339. doi: 10.1002/brb3.70339.
2. Socially meaningful visual context either enhances or inhibits vocalisation processing in the macaque brain.
Nat Commun. 2022 Aug 19;13(1):4886. doi: 10.1038/s41467-022-32512-9.
3. Multisensory stimuli enhance the effectiveness of equivalence learning in healthy children and adolescents.
PLoS One. 2022 Jul 29;17(7):e0271513. doi: 10.1371/journal.pone.0271513. eCollection 2022.
4. The hearing hippocampus.
Prog Neurobiol. 2022 Nov;218:102326. doi: 10.1016/j.pneurobio.2022.102326. Epub 2022 Jul 21.
5. Multisensory guided associative learning in healthy humans.
PLoS One. 2019 Mar 12;14(3):e0213094. doi: 10.1371/journal.pone.0213094. eCollection 2019.
6. The Prediction of Impact of a Looming Stimulus onto the Body Is Subserved by Multisensory Integration Mechanisms.
J Neurosci. 2017 Nov 1;37(44):10656-10670. doi: 10.1523/JNEUROSCI.0610-17.2017. Epub 2017 Oct 9.
7. Activation in the angular gyrus and in the pSTS is modulated by face primes during voice recognition.
Hum Brain Mapp. 2017 May;38(5):2553-2565. doi: 10.1002/hbm.23540. Epub 2017 Feb 20.
8. Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.
J Neurosci. 2014 May 14;34(20):6813-21. doi: 10.1523/JNEUROSCI.4478-13.2014.
9. Uni- and multisensory brain areas are synchronised across spectators when watching unedited dance recordings.
Iperception. 2013 Jun 3;4(4):265-84. doi: 10.1068/i0536. eCollection 2013.
10. People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus.
Cortex. 2014 Jan;50:125-36. doi: 10.1016/j.cortex.2013.07.011. Epub 2013 Aug 2.