Escudero Paola, Smit Eline A, Mulak Karen E
The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, NSW 2751, Australia.
Australian Research Council Centre of Excellence for the Dynamics of Language, Canberra, ACT 2601, Australia.
Brain Sci. 2022 Nov 25;12(12):1618. doi: 10.3390/brainsci12121618.
Adults commonly struggle with perceiving and recognizing the sounds and words of a second language (L2), especially when the L2 sounds do not have a counterpart in the learner's first language (L1). We examined how L1 Mandarin L2 English speakers learned pseudo English words within a cross-situational word learning (CSWL) task previously presented to monolingual English and bilingual Mandarin-English speakers. CSWL is ambiguous because participants are not provided with direct mappings of words and object referents. Rather, learners discern word-object correspondences by tracking multiple co-occurrences across learning trials. The monolinguals and bilinguals tested in previous studies showed lower performance for pseudo words that formed vowel minimal pairs (e.g., /dit/-/dɪt/) than for pseudo words that formed consonant minimal pairs (e.g., /bɔn/-/pɔn/) or non-minimal pairs that differed in all segments (e.g., /bɔn/-/dit/). In contrast, L1 Mandarin L2 English listeners struggled to learn all word pairs. We explain this seemingly contradictory finding by considering the multiplicity of acoustic cues in the stimuli presented to all participant groups. Stimuli were produced in infant-directed speech (IDS) in order to compare performance by children and adults, and because previous research had shown that IDS enhances L1 and L2 acquisition. We propose that the suprasegmental pitch variation in the vowels typical of IDS stimuli might be perceived as lexical tone distinctions by tonal language speakers who cannot fully inhibit their L1 activation, resulting in high lexical competition and diminished learning during an ambiguous word learning task. Our results are in line with the Second Language Linguistic Perception (L2LP) model, which proposes that fine-grained acoustic information from multiple sources and the ability to switch between language modes affect non-native phonetic and lexical development.
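To make the co-occurrence-tracking idea behind CSWL concrete, the following minimal Python sketch illustrates how an idealized learner could resolve ambiguous word-object mappings by tallying co-occurrences across trials. This is not the authors' procedure or analysis; the words, objects, and trial structure below are hypothetical and chosen only to mirror the pseudo-word examples in the abstract.

```python
# Illustrative sketch of cross-situational word learning (CSWL) via simple
# co-occurrence counting. Words, objects, and trials are hypothetical.

from collections import defaultdict

# Each trial presents two spoken words and two objects with no explicit pairing.
trials = [
    ({"bon", "dit"}, {"obj_A", "obj_B"}),
    ({"bon", "pon"}, {"obj_A", "obj_C"}),
    ({"dit", "pon"}, {"obj_B", "obj_C"}),
    ({"bon", "dit"}, {"obj_A", "obj_B"}),
]

# Tally how often each word co-occurs with each object across all trials.
counts = defaultdict(lambda: defaultdict(int))
for words, objects in trials:
    for w in words:
        for o in objects:
            counts[w][o] += 1

# Infer each word's referent as the object it co-occurred with most often;
# no single trial disambiguates the mapping, but the aggregate counts do.
for word, obj_counts in counts.items():
    best = max(obj_counts, key=obj_counts.get)
    print(f"{word} -> {best} (co-occurrences: {dict(obj_counts)})")
```

On this view, the abstract's proposal is that IDS pitch variation effectively adds competing "tonal" variants for L1 Mandarin listeners, inflating the set of candidate word forms and weakening the co-occurrence evidence available for any single mapping.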