

When context is and isn't helpful: A corpus study of naturalistic speech.

Affiliations

Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL, 60208, USA.

RIKEN Center for Brain Science, Wako, Japan.

Publication Information

Psychon Bull Rev. 2020 Aug;27(4):640-676. doi: 10.3758/s13423-019-01687-6.

Abstract

Infants learn about the sounds of their language and adults process the sounds they hear, even though sound categories often overlap in their acoustics. Researchers have suggested that listeners rely on context for these tasks, and have proposed two main ways that context could be helpful: top-down information accounts, which argue that listeners use context to predict which sound will be produced, and normalization accounts, which argue that listeners compensate for the fact that the same sound is produced differently in different contexts by factoring out this systematic context-dependent variability from the acoustics. These ideas have been somewhat conflated in past research, and have rarely been tested on naturalistic speech. We implement top-down and normalization accounts separately and evaluate their relative efficacy on spontaneous speech, using the test case of Japanese vowels. We find that top-down information strategies are effective even on spontaneous speech. Surprisingly, we find that at least one common implementation of normalization is ineffective on spontaneous speech, in contrast to what has been found on lab speech. We provide analyses showing that when there are systematic regularities in which contexts different sounds occur in (regularities that are common in naturalistic speech, but generally controlled for in lab speech), normalization can actually increase category overlap rather than decrease it. This work calls into question the usefulness of normalization in naturalistic listening tasks, and highlights the importance of applying ideas from carefully controlled lab speech to naturalistic, spontaneous speech.
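The normalization failure the abstract describes can be illustrated with a minimal, hypothetical sketch (not the paper's actual implementation, and all data values below are made up): when categories are unevenly distributed across contexts, per-context mean-centering, one common way to implement normalization, removes part of the category difference itself rather than only the context effect.

```python
import statistics

# Hypothetical 1-D "formant" values. There is no real context effect here,
# but category A occurs mostly in context 1 and category B mostly in
# context 2 -- the kind of systematic regularity common in spontaneous speech.
samples = [  # (category, context, formant)
    ("A", 1, 1.0), ("A", 1, 1.2), ("A", 1, 0.8), ("B", 1, 4.0),
    ("A", 2, 1.1), ("B", 2, 4.1), ("B", 2, 3.9), ("B", 2, 4.2),
]

def separation(points):
    """Distance between category means in pooled-SD units
    (larger = less category overlap)."""
    a = [f for c, f in points if c == "A"]
    b = [f for c, f in points if c == "B"]
    return abs(statistics.mean(a) - statistics.mean(b)) / statistics.pstdev(a + b)

raw = [(c, f) for c, _, f in samples]

# "Normalization": mean-center each token within its context, factoring out
# the (supposed) systematic context effect from the acoustics.
ctx_means = {
    ctx: statistics.mean(f for _, x, f in samples if x == ctx)
    for ctx in {x for _, x, _ in samples}
}
normalized = [(c, f - ctx_means[x]) for c, x, f in samples]

sep_raw = separation(raw)
sep_norm = separation(normalized)
print(f"separation raw:        {sep_raw:.2f}")   # higher separation
print(f"separation normalized: {sep_norm:.2f}")  # lower: overlap increased
```

Because each context's mean is dominated by one category, subtracting it drags the two categories toward each other, so the normalized distributions overlap more than the raw ones did, consistent with the paper's observation about naturalistic speech.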

