Shaw Kathleen E, Bortfeld Heather
Department of Psychology, University of Connecticut, Storrs, CT, USA.
Psychological Sciences, University of California, Merced, Merced, CA, USA; Haskins Laboratories, New Haven, CT, USA.
Front Psychol. 2015 Dec 15;6:1844. doi: 10.3389/fpsyg.2015.01844. eCollection 2015.
Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners (infants) are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding.