National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA.
Department of Psychology, University of California, San Diego, CA, USA.
Dev Sci. 2021 Jul;24(4):e13086. doi: 10.1111/desc.13086. Epub 2021 Mar 2.
Children's gaze behavior reflects emergent linguistic knowledge and real-time language processing of speech, but little is known about naturalistic gaze behaviors while watching signed narratives. Measuring gaze patterns in signing children could uncover how they master perceptual gaze control during a time of active language learning. Gaze patterns were recorded using a Tobii X120 eye tracker in 31 non-signing and 30 signing hearing infants (5-14 months) and children (2-8 years) as they watched signed narratives on video. Intelligibility of the signed narratives was manipulated by presenting them in natural (forward) and video-reversed ("low intelligibility") conditions; this manipulation was used because it distorts semantic content while preserving most surface phonological features. We examined where participants looked using linear mixed models with Language Group (non-signing vs. signing) and Video Condition (Forward vs. Reversed), controlling for trial order. Non-signing infants and children showed a preference to look at the face as well as at areas below the face, possibly because their gaze was drawn to the moving articulators in signing space. Native signing infants and children demonstrated resilient, face-focused gaze behavior. Moreover, their gaze behavior was unchanged for video-reversed signed narratives, similar to what has been seen in adult native signers, possibly because they already have efficient, highly focused gaze behavior. The present study demonstrates that human perceptual gaze control is sensitive to visual language experience over the first year of life and emerges early, by 6 months of age. These results underscore the critical importance of early visual language exposure for deaf infants. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=2ahWUluFAAg.
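A minimal sketch (in Python, using statsmodels) of how a linear mixed model of the kind described in the abstract might be specified. The variable names (prop_face_gaze, language_group, video_condition, trial_order, participant_id) and the data file are hypothetical placeholders, not the authors' actual analysis code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per trial,
# with the proportion of gaze time on the face as the outcome measure.
data = pd.read_csv("gaze_trials.csv")

# Fixed effects: Language Group (non-signing vs. signing), Video Condition
# (Forward vs. Reversed), their interaction, and trial order as a covariate;
# random intercepts grouped by participant.
model = smf.mixedlm(
    "prop_face_gaze ~ language_group * video_condition + trial_order",
    data=data,
    groups=data["participant_id"],
)
result = model.fit()
print(result.summary())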