Multisensory integration enhances phonemic restoration.

Author information

Shahin Antoine J, Miller Lee M

Affiliation

Center for Mind & Brain, University of California, Davis, California 95618, USA.

Publication information

J Acoust Soc Am. 2009 Mar;125(3):1744-50. doi: 10.1121/1.3075576.

Abstract

Phonemic restoration occurs when speech is perceived to be continuous through noisy interruptions, even when the speech signal is artificially removed from the interrupted epochs. This temporal filling-in illusion helps maintain robust comprehension in adverse environments and illustrates how contextual knowledge through the auditory modality (e.g., lexical) can improve perception. This study investigated how one important form of context, visual speech, affects phonemic restoration. The hypothesis was that audio-visual integration of speech should improve phonemic restoration, allowing the perceived continuity to span longer temporal gaps. Subjects listened to tri-syllabic words with a portion of each word replaced by white noise while watching lip-movement that was either congruent, temporally reversed (incongruent), or static. For each word, subjects judged whether the utterance sounded continuous or interrupted, where a "continuous" response indicated an illusory percept. Results showed that illusory filling-in of longer white noise durations (longer missing segments) occurred when the mouth movement was congruent with the spoken word compared to the other conditions, with no differences occurring between the static and incongruent conditions. Thus, phonemic restoration is enhanced when applying contextual knowledge through multisensory integration.
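For readers who want a concrete picture of the stimulus manipulation described above, the short Python sketch below shows one way to excise a span of a recorded word and replace it with level-matched white noise, producing the kind of interrupted stimulus used in phonemic-restoration experiments. This is a minimal illustration only, not the authors' code: numpy, the replace_with_noise helper, and the sampling rate, segment onset, and duration are all assumptions chosen for the example.

import numpy as np

def replace_with_noise(word, fs, start_s, dur_s, rng=None):
    # Hypothetical helper (not from the paper): replace the span
    # [start_s, start_s + dur_s) seconds of `word` with white noise
    # matched in RMS level to the removed segment.
    rng = rng if rng is not None else np.random.default_rng()
    out = word.copy()
    i0 = int(start_s * fs)
    i1 = min(len(word), i0 + int(dur_s * fs))
    segment = word[i0:i1]
    rms = np.sqrt(np.mean(segment ** 2))
    noise = rng.standard_normal(i1 - i0)
    noise *= rms / (np.sqrt(np.mean(noise ** 2)) + 1e-12)
    out[i0:i1] = noise
    return out

# Example: replace 200 ms starting at 0.3 s in a 1-second, 16 kHz placeholder signal.
fs = 16000
word = 0.01 * np.random.default_rng(0).standard_normal(fs)  # stand-in for a recorded word
stimulus = replace_with_noise(word, fs, start_s=0.3, dur_s=0.2)

Varying dur_s across trials would correspond to the different white-noise (missing-segment) durations whose illusory filling-in the study compared across the congruent, incongruent, and static lip-movement conditions.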

Similar articles

1. Multisensory integration enhances phonemic restoration.
J Acoust Soc Am. 2009 Mar;125(3):1744-50. doi: 10.1121/1.3075576.
4. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
J Neurosci. 2018 Feb 14;38(7):1835-1849. doi: 10.1523/JNEUROSCI.1566-17.2017. Epub 2017 Dec 20.
5. Phonemic restoration: insights from a new methodology.
J Exp Psychol Gen. 1981 Dec;110(4):474-94. doi: 10.1037//0096-3445.110.4.474.
6. Perceptual restoration of a "missing" speech sound: auditory induction or illusion?
Percept Psychophys. 1992 Jan;51(1):14-32. doi: 10.3758/bf03205070.

Cited by

6. Children use visual speech to compensate for non-intact auditory speech.
J Exp Child Psychol. 2014 Oct;126:295-312. doi: 10.1016/j.jecp.2014.05.003. Epub 2014 Jul 4.
9. Neural restoration of degraded audiovisual speech.
Neuroimage. 2012 Mar;60(1):530-8. doi: 10.1016/j.neuroimage.2011.11.097. Epub 2011 Dec 10.
10. Speech cues contribute to audiovisual spatial integration.
PLoS One. 2011;6(8):e24016. doi: 10.1371/journal.pone.0024016. Epub 2011 Aug 31.

References

2. Neural mechanisms for illusory filling-in of degraded speech.
Neuroimage. 2009 Feb 1;44(3):1133-43. doi: 10.1016/j.neuroimage.2008.09.045. Epub 2008 Oct 15.
6. The effect of a flashing visual stimulus on the auditory continuity illusion.
Percept Psychophys. 2007 Apr;69(3):393-9. doi: 10.3758/bf03193760.
