

Similar Articles

1
Psychophysics of the McGurk and other audiovisual speech integration effects.
J Exp Psychol Hum Percept Perform. 2011 Aug;37(4):1193-209. doi: 10.1037/a0023100.
2
Perceptual uncertainty explains activation differences between audiovisual congruent speech and McGurk stimuli.
Hum Brain Mapp. 2024 Mar;45(4):e26653. doi: 10.1002/hbm.26653.
3
Speech-specific audiovisual integration modulates induced theta-band oscillations.
PLoS One. 2019 Jul 16;14(7):e0219744. doi: 10.1371/journal.pone.0219744. eCollection 2019.
4
Processing of audiovisually congruent and incongruent speech in school-age children with a history of specific language impairment: a behavioral and event-related potentials study.
Dev Sci. 2015 Sep;18(5):751-70. doi: 10.1111/desc.12263. Epub 2014 Nov 29.
5
A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.
PLoS Comput Biol. 2017 Feb 16;13(2):e1005229. doi: 10.1371/journal.pcbi.1005229. eCollection 2017 Feb.
6
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Atten Percept Psychophys. 2016 Feb;78(2):583-601. doi: 10.3758/s13414-015-1026-y.
7
Behavioral Response Modeling to Resolve Listener- and Stimulus-Related Influences on Audiovisual Speech Integration in Cochlear Implant Users.
Ear Hear. 2025;46(3):596-606. doi: 10.1097/AUD.0000000000001607. Epub 2024 Dec 11.
8
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
J Neurosci. 2018 Feb 14;38(7):1835-1849. doi: 10.1523/JNEUROSCI.1566-17.2017. Epub 2017 Dec 20.
9
Metacognition in the audiovisual McGurk illusion: perceptual and causal confidence.
Philos Trans R Soc Lond B Biol Sci. 2023 Sep 25;378(1886):20220348. doi: 10.1098/rstb.2022.0348. Epub 2023 Aug 7.
10
Audiovisual speech perception: Moving beyond McGurk.
J Acoust Soc Am. 2022 Dec;152(6):3216. doi: 10.1121/10.0015262.

Cited By

1
Variations in unisensory speech perception explain interindividual differences in McGurk illusion susceptibility.
Psychon Bull Rev. 2025 Apr 24. doi: 10.3758/s13423-025-02697-3.
2
Neural speech tracking in a virtual acoustic environment: audio-visual benefit for unscripted continuous speech.
Front Hum Neurosci. 2025 Apr 9;19:1560558. doi: 10.3389/fnhum.2025.1560558. eCollection 2025.
3
The McGurk effect is similar in native Mandarin Chinese and American English speakers.
Front Psychol. 2025 Mar 28;16:1531566. doi: 10.3389/fpsyg.2025.1531566. eCollection 2025.
4
The noisy encoding of disparity model predicts perception of the McGurk effect in native Japanese speakers.
Front Neurosci. 2024 Jun 26;18:1421713. doi: 10.3389/fnins.2024.1421713. eCollection 2024.
5
Investigation of Cross-Language and Stimulus-Dependent Effects on the McGurk Effect with Finnish and Japanese Speakers and Listeners.
Brain Sci. 2023 Aug 13;13(8):1198. doi: 10.3390/brainsci13081198.
6
The Impact of Singing on Visual and Multisensory Speech Perception in Children on the Autism Spectrum.
Multisens Res. 2022 Dec 30;36(1):57-74. doi: 10.1163/22134808-bja10087.
7
The neural bases of multimodal sensory integration in older adults.
Int J Behav Dev. 2021 Sep 1;45(5):409-417. doi: 10.1177/0165025420979362. Epub 2021 Jan 11.
8
Rethinking the McGurk effect as a perceptual illusion.
Atten Percept Psychophys. 2021 Aug;83(6):2583-2598. doi: 10.3758/s13414-021-02265-6. Epub 2021 Apr 21.
9
Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: A causal inference explanation.
Cortex. 2020 Dec;133:371-383. doi: 10.1016/j.cortex.2020.10.002. Epub 2020 Oct 17.
10
The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech.
Sci Adv. 2020 Nov 4;6(45). doi: 10.1126/sciadv.abc6348. Print 2020 Nov.

References

1
Speech Perception as a Multimodal Phenomenon.
Curr Dir Psychol Sci. 2008 Dec;17(6):405-409. doi: 10.1111/j.1467-8721.2008.00615.x.
2
The natural statistics of audiovisual speech.
PLoS Comput Biol. 2009 Jul;5(7):e1000436. doi: 10.1371/journal.pcbi.1000436. Epub 2009 Jul 17.
3
Mismatch negativity with visual-only and audiovisual speech.
Brain Topogr. 2009 May;21(3-4):207-15. doi: 10.1007/s10548-009-0094-5. Epub 2009 Apr 30.
4
A linear model of acoustic-to-facial mapping: model parameters, data set size, and generalization across speakers.
J Acoust Soc Am. 2008 Nov;124(5):3183-90. doi: 10.1121/1.2982369.
5
Quantified acoustic-optical speech signal incongruity identifies cortical sites of audiovisual speech processing.
Brain Res. 2008 Nov 25;1242:172-84. doi: 10.1016/j.brainres.2008.04.018. Epub 2008 Apr 18.
6
An event-related fMRI investigation of voice-onset time discrimination.
Neuroimage. 2008 Mar 1;40(1):342-52. doi: 10.1016/j.neuroimage.2007.10.064. Epub 2007 Nov 21.
7
Abstract coding of audiovisual speech: beyond sensory representation.
Neuron. 2007 Dec 20;56(6):1116-26. doi: 10.1016/j.neuron.2007.09.037.
8
McGurk effects in cochlear-implanted deaf subjects.
Brain Res. 2008 Jan 10;1188:87-99. doi: 10.1016/j.brainres.2007.10.049. Epub 2007 Oct 26.
9
Similarity structure in visual speech perception and optical phonetic signals.
Percept Psychophys. 2007 Oct;69(7):1070-83. doi: 10.3758/bf03193945.
10
The processing of audio-visual speech: empirical and neural bases.
Philos Trans R Soc Lond B Biol Sci. 2008 Mar 12;363(1493):1001-10. doi: 10.1098/rstb.2007.2155.

Psychophysics of the McGurk and other audiovisual speech integration effects.

Affiliation

Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California, USA.

Publication Information

J Exp Psychol Hum Percept Perform. 2011 Aug;37(4):1193-209. doi: 10.1037/a0023100.

DOI: 10.1037/a0023100
PMID: 21574741
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3149717/
Abstract

When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called McGurk effect), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for the distribution of the four types of perceptual responses to 384 different stimuli from four talkers. The measures included mutual information, correlations, and acoustic measures, all representing audiovisual stimulus relationships. In Experiment 1, open-set perceptual responses were obtained for acoustic /bɑ/ or /lɑ/ dubbed to video /bɑ, dɑ, gɑ, vɑ, zɑ, lɑ, wɑ, ðɑ/. The talker, the video syllable, and the acoustic syllable significantly influenced the type of response. In Experiment 2, the best predictors of response category proportions were a subset of the physical stimulus measures, with the variance accounted for in the perceptual response category proportions between 17% and 52%. That audiovisual stimulus relationships can account for perceptual response distributions supports the possibility that internal representations are based on modality-specific stimulus relationships.
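The abstract names mutual information as one of the stimulus-relationship measures used to predict response-category proportions. As a rough illustration of that kind of measure (the toy data and category labels below are invented for the example, not the paper's stimuli), mutual information between two paired discrete variables can be computed from their joint and marginal frequencies:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length sequences of discrete labels."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # Sum p(x,y) * log2( p(x,y) / (p(x) p(y)) ) over observed pairs
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical pairing: auditory syllable vs. reported percept
audio = ["ba", "ba", "ba", "la", "la", "la"]
resp  = ["ba", "da", "ba", "la", "la", "va"]
print(round(mutual_information(audio, resp), 3))  # prints 1.0
```

In this toy case each response uniquely identifies the auditory syllable, so the mutual information equals the one bit of entropy in the audio labels; in the actual study such measures were computed over audiovisual stimulus relationships and used alongside correlations and acoustic measures as regression predictors.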
