Kao Chieh, Zhang Yang
Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis.
Center for Neurobehavioral Development, University of Minnesota, Minneapolis.
J Speech Lang Hear Res. 2020 Aug 10;63(8):2508-2521. doi: 10.1044/2020_JSLHR-19-00329. Epub 2020 Jul 13.
Purpose Spoken language is inherently multimodal and multidimensional in natural settings, but very little is known about how second language (L2) learners process multilayered speech signals carrying both phonetic and affective cues. This study investigated how late L2 learners undertake parallel processing of linguistic and affective information in the speech signal at behavioral and neurophysiological levels. Method Behavioral and event-related potential measures were taken in a selective cross-modal priming paradigm to examine how late L2 learners (n = 24, mean age = 25.54 years) assessed the congruency of phonetic (target vowel: /a/ or /i/) and emotional (target affect: happy or angry) information between the visual primes of facial pictures and the auditory targets of spoken syllables. Results Behavioral accuracy data showed a significant congruency effect in affective (but not phonetic) priming. Unlike a previous report on monolingual first language (L1) users, the L2 users showed no facilitation in reaction time for congruency detection in either selective priming task. The neurophysiological results revealed a robust N400 response that was stronger in the phonetic condition but showed no clear lateralization; the N400 effect was weaker in late L2 listeners than in monolingual L1 listeners. Following the N400, late L2 learners showed a weaker late positive response than the monolingual L1 users, particularly in the left central to posterior electrode regions. Conclusions The results demonstrate distinct patterns of behavioral and neural processing of phonetic and affective information in L2 speech, with reduced neural representations in both the N400 and the later processing stage, and they provide an impetus for further research on similarities and differences in L1 and L2 multisensory speech perception in bilingualism.
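The selective cross-modal priming design described in the Method section can be sketched as a simple factorial structure: each visual facial prime and auditory syllable target carries both a phonetic feature (vowel /a/ or /i/) and an affective feature (happy or angry), and congruency is evaluated separately on whichever dimension is task-relevant. The sketch below is illustrative only (not the authors' experimental code) and simply enumerates the resulting prime-target combinations with their congruency labels.

```python
from itertools import product

# Stimulus features from the study design: two target vowels and two
# target affects, carried by both the visual prime and the auditory target.
vowels = ["a", "i"]
affects = ["happy", "angry"]

def build_trials():
    """Cross all prime/target combinations of vowel and affect,
    labeling phonetic and affective congruency separately."""
    trials = []
    for pv, pa, tv, ta in product(vowels, affects, vowels, affects):
        trials.append({
            "prime": {"vowel": pv, "affect": pa},
            "target": {"vowel": tv, "affect": ta},
            # Congruency on the attended dimension in each selective task:
            "phonetic_congruent": pv == tv,
            "affective_congruent": pa == ta,
        })
    return trials

trials = build_trials()
print(len(trials))  # 16 prime-target combinations
```

In a selective priming task, only one of the two congruency labels is relevant per block (phonetic or affective), which is what allows the study to compare congruency effects across the two dimensions within the same stimulus set.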