
Psychophysics of the McGurk and other audiovisual speech integration effects.

Affiliation

Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California, USA.

Publication Information

J Exp Psychol Hum Percept Perform. 2011 Aug;37(4):1193-209. doi: 10.1037/a0023100.

Abstract

When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses: auditory correct, visual correct, fusion (the so-called McGurk effect), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for the distribution of the four types of perceptual responses to 384 different stimuli from four talkers. The measures included mutual information, correlations, and acoustic measures, all representing audiovisual stimulus relationships. In Experiment 1, open-set perceptual responses were obtained for acoustic /bɑ/ or /lɑ/ dubbed to video /bɑ, dɑ, gɑ, vɑ, zɑ, lɑ, wɑ, ðɑ/. The talker, the video syllable, and the acoustic syllable significantly influenced the type of response. In Experiment 2, the best predictors of response category proportions were a subset of the physical stimulus measures, which accounted for between 17% and 52% of the variance in the perceptual response category proportions. That audiovisual stimulus relationships can account for perceptual response distributions supports the possibility that internal representations are based on modality-specific stimulus relationships.
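Among the predictors named in the abstract is mutual information between auditory and visual stimulus properties. As an illustrative sketch only — the study computed its measures from physical stimulus recordings, and the feature labels below are hypothetical — a plug-in estimate of mutual information between two discrete variables can be obtained from co-occurrence counts:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(A;V) in bits between two discrete
    variables, from a list of (a, v) co-occurrence observations."""
    n = len(pairs)
    joint = Counter(pairs)                 # p(a, v) counts
    pa = Counter(a for a, _ in pairs)      # marginal p(a) counts
    pv = Counter(v for _, v in pairs)      # marginal p(v) counts
    mi = 0.0
    for (a, v), c in joint.items():
        p_av = c / n
        mi += p_av * log2(p_av / ((pa[a] / n) * (pv[v] / n)))
    return mi

# Toy paired audio/visual frame features (hypothetical labels):
pairs = [("open", "wide"), ("open", "wide"), ("closed", "narrow"),
         ("closed", "narrow"), ("open", "narrow"), ("closed", "wide")]
print(round(mutual_information(pairs), 3))  # → 0.082
```

The estimate is zero when the two streams are statistically independent and reaches the marginal entropy when one stream fully predicts the other, which is what makes it a natural summary of how tightly auditory and visual components covary.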



