
Echoes of L1 Syllable Structure in L2 Phoneme Recognition.

Author information

Yasufuku Kanako, Doyle Gabriel

Affiliations

Department of Linguistics and Asian/Middle Eastern Languages, San Diego State University, San Diego, CA, United States.

Publication information

Front Psychol. 2021 Jul 20;12:515237. doi: 10.3389/fpsyg.2021.515237. eCollection 2021.

Abstract

Learning to move from auditory signals to phonemic categories is a crucial component of first, second, and multilingual language acquisition. In L1 and simultaneous multilingual acquisition, learners build up phonological knowledge to structure their perception within a language. For sequential multilinguals, this knowledge may support or interfere with acquiring language-specific representations for a new phonemic categorization system. Syllable structure is a part of this phonological knowledge, and language-specific syllabification preferences influence language acquisition, including early word segmentation. As a result, we expect to see language-specific syllable structure influencing speech perception as well. Initial evidence of an effect appears in Ali et al. (2011), who argued that cross-linguistic differences in McGurk fusion within a syllable reflected listeners' language-specific syllabification preferences. Building on a framework from Cho and McQueen (2006), we argue that this could reflect the Phonological-Superiority Hypothesis (differences in L1 syllabification preferences make some syllabic positions harder to classify than others) or the Phonetic-Superiority Hypothesis (the acoustic qualities of speech sounds in some positions make it difficult to perceive unfamiliar sounds). However, their design does not distinguish between these two hypotheses. The current study extends the work of Ali et al. (2011) by testing Japanese listeners, and by adding audio-only and congruent audio-visual stimuli to test the effects of syllabification preferences beyond just McGurk fusion. Eighteen native English speakers and 18 native Japanese speakers were asked to transcribe nonsense words in an artificial language. English allows stop consonants in syllable codas while Japanese heavily restricts them, but both groups showed similar patterns of McGurk fusion in stop codas. This is inconsistent with the Phonological-Superiority Hypothesis. However, when visual information was added, the phonetic influences on transcription accuracy largely disappeared. This is inconsistent with the Phonetic-Superiority Hypothesis. We argue from these results that neither acoustic informativity nor interference of a listener's phonological knowledge is superior, and sketch a cognitively inspired rational cue integration framework as a third hypothesis to explain how L1 phonological knowledge affects L2 perception.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c25e/8329372/c1716d3e4d58/fpsyg-12-515237-g001.jpg
