
Leveraging audiovisual speech perception to measure anticipatory coarticulation.

Affiliations

Department of Linguistics, University of Oregon, Eugene, Oregon 97403, USA.

Department of Linguistics, University of British Columbia, Vancouver, British Columbia, Canada.

Publication information

J Acoust Soc Am. 2018 Oct;144(4):2447. doi: 10.1121/1.5064783.

Abstract

A noninvasive method for accurately measuring anticipatory coarticulation at experimentally defined temporal locations is introduced. The method leverages work in audiovisual (AV) speech perception to provide a synthetic and robust measure that can be used to inform psycholinguistic theory. In this validation study, speakers were audio-video recorded while producing simple subject-verb-object sentences with contrasting object noun rhymes. Coarticulatory resistance of target noun onsets was manipulated as was metrical context for the determiner that modified the noun. Individual sentences were then gated from the verb to sentence end at segmental landmarks. These stimuli were presented to perceivers who were tasked with guessing the sentence-final rhyme. An audio-only condition was included to estimate the contribution of visual information to perceivers' performance. Findings were that perceivers accurately identified rhymes earlier in the AV condition than in the audio-only condition (i.e., at determiner onset vs determiner vowel). Effects of coarticulatory resistance and metrical context were similar across conditions and consistent with previous work on coarticulation. These findings were further validated with acoustic measurement of the determiner vowel and a cumulative video-based measure of perioral movement. Overall, gated AV speech perception can be used to test specific hypotheses regarding coarticulatory scope and strength in running speech.


Similar articles

Anticipatory coarticulation facilitates word recognition in toddlers.
Cognition. 2015 Sep;142:345-50. doi: 10.1016/j.cognition.2015.05.009. Epub 2015 Jun 11.

Visual context constrains language-mediated anticipatory eye movements.
Q J Exp Psychol (Hove). 2020 Mar;73(3):458-467. doi: 10.1177/1747021819881615. Epub 2019 Oct 17.

