Caudrelier Tiphaine, Ménard Lucie, Beausoleil Marie-Michèle, Martin Clara D, Samuel Arthur G
Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon 2, 5 avenue Pierre Mendès France, 69676 BRON Cedex, Lyon, France.
Basque Center on Cognition Brain and Language (BCBL), Paseo Mikeletegi 69, Gipuzkoa, San Sebastian 20009, Spain.
PNAS Nexus. 2024 Aug 23;3(9):pgae354. doi: 10.1093/pnasnexus/pgae354. eCollection 2024 Sep.
Humans are remarkably good at understanding spoken language, despite the huge variability of the signal as a function of the talker, the situation, and the environment. This success relies on having access to stable representations based on years of speech input, coupled with the ability to adapt to short-term deviations from these norms, e.g. accented speech or speech altered by ambient noise. In the last two decades, there has been a robust research effort focused on a possible mechanism for adjusting to accented speech. In these studies, listeners typically hear 15-20 words in which a speech sound has been altered, creating a short-term deviation from its longer-term representation. After exposure to these items, listeners demonstrate "lexically driven phonetic recalibration": they alter their categorization of speech sounds, expanding a speech category to take into account the recently heard deviations from their long-term representations. In the current study, we investigate such adjustments by bilingual listeners. French-English bilinguals were first exposed to nonstandard pronunciations of a sound (/s/ or /f/) in one language and tested for recalibration in both languages. Exposure then continued with both the original type of mispronunciation in the same language and mispronunciations in the other language, shifted in the opposite direction. In a final test, we found simultaneous recalibration in opposite directions for the two languages: listeners shifted their French perception in one direction and their English in the other. Bilinguals can thus maintain separate adjustments, for the same sounds, when a talker's speech differs across two languages.