The Effect of Visual Articulatory Information on the Neural Correlates of Non-native Speech Sound Discrimination.

Author Information

Plumridge James M A, Barham Michael P, Foley Denise L, Ware Anna T, Clark Gillian M, Albein-Urios Natalia, Hayden Melissa J, Lum Jarrad A G

Affiliations

Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia.

Publication Information

Front Hum Neurosci. 2020 Feb 7;14:25. doi: 10.3389/fnhum.2020.00025. eCollection 2020.

Abstract

Behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated. This study examined the influence of visual articulatory information on the neural correlates of non-native speech sound discrimination. English speakers' discrimination of the Hindi dental and retroflex sounds was measured using the mismatch negativity (MMN) event-related potential, before and after they completed one of three 8-min training conditions. In an audio-visual speech training condition (n = 14), each sound was presented with its corresponding visual articulation. In one control condition (n = 14), both sounds were presented with the same visual articulation, resulting in one congruent and one incongruent audio-visual pairing. In another control condition (n = 14), both sounds were presented with the same image of a still face. The control conditions aimed to rule out the possibility that the MMN is influenced by non-specific audio-visual pairings, or by general exposure to the dental and retroflex sounds over the course of the study. The results showed that audio-visual speech training reduced the latency of the MMN but did not affect MMN amplitude. No change in MMN amplitude or latency was observed for the two control conditions. The pattern of results suggests that a relatively short audio-visual speech training session (i.e., 8 min) may increase the speed with which the brain processes non-native speech sound contrasts. The absence of a training effect on MMN amplitude suggests a single session of audio-visual speech training does not lead to the formation of more discrete memory traces for non-native speech sounds. Longer and/or multiple sessions might be needed to influence the MMN amplitude.
