

Perception of intersensory synchrony in audiovisual speech: not that special.

Affiliation

Tilburg University, Department of Medical Psychology and Neuropsychology, P.O. Box 90153, 5000 LE Tilburg, The Netherlands.

Publication information

Cognition. 2011 Jan;118(1):75-83. doi: 10.1016/j.cognition.2010.10.002. Epub 2010 Oct 29.

Abstract

Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive when judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely integrated with lipread speech if heard as speech than as non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative "special" nature of speech, but rather reflect low-level stimulus differences.

