Luchkina Elena, Simon Leah R, Waxman Sandra R
Department of Psychology, Harvard University, Cambridge, MA, USA.
Department of Psychology, University of Maryland, College Park, MD, USA.
Behav Res Methods. 2025 Apr 28;57(6):158. doi: 10.3758/s13428-025-02683-6.
Eye-tracking measures, which provide crucial insight into the processes underlying human language, cognition, perception, and social behavior, are particularly important in research with preverbal infants. Until recently, infant eye-gaze analysis required either expensive corneal-reflection eye-tracking technology or labor-intensive manual annotation (coding). Fortunately, iCatcher+, a recently developed AI-based automated gaze annotation tool, promises to reduce these expenses. For iCatcher+ to become a mainstream tool for gaze annotation, it is essential to determine how its annotations compare to those produced by trained human coders. Here, we provide such a comparison, using 288 videos from a word-learning experiment with 12-month-olds. We evaluate the agreement between the two annotation systems and the effects identified using each. We find that (1) agreement between human-coded and iCatcher+-annotated video data is excellent (88%) and comparable to intercoder agreement among human coders (90%), and (2) both annotation systems yield the same patterns of effects. This provides strong assurance that iCatcher+ is a viable alternative to manual annotation of infant gaze, one that holds promise for increasing efficiency, reducing costs, and broadening the empirical base of infant eye-tracking research.
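The agreement figures reported above (88% and 90%) are proportions of matching gaze codes across the two annotation streams. The sketch below illustrates one way such frame-by-frame percent agreement might be computed; it is an assumption-based illustration, not the authors' actual analysis pipeline, and the label set ("left", "right", "away") and data layout are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): frame-by-frame percent agreement
# between two gaze annotation streams, e.g., human codes vs. iCatcher+ output.
# The labels and example data are assumptions for illustration only.

from typing import Sequence


def percent_agreement(codes_a: Sequence[str], codes_b: Sequence[str]) -> float:
    """Proportion of frames on which two annotation streams assign the same label."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Annotation streams must cover the same number of frames.")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)


# Hypothetical per-frame gaze labels for one short trial:
human_codes = ["left", "left", "away", "right", "right"]
icatcher_codes = ["left", "left", "left", "right", "right"]

print(f"Agreement: {percent_agreement(human_codes, icatcher_codes):.0%}")  # Agreement: 80%
```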