Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita-shi, Osaka 565-0871, Japan.
International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan.
Behav Res Methods. 2024 Oct;56(7):7374-7390. doi: 10.3758/s13428-024-02424-1. Epub 2024 May 1.
Online experiments have been transforming the field of behavioral research, enabling researchers to increase sample sizes, access diverse populations, lower the costs of data collection, and promote reproducibility. The field of developmental psychology increasingly exploits such online testing approaches. Since infants cannot give explicit behavioral responses, one key outcome measure is infants' gaze behavior. In the absence of automated eye-trackers in participants' homes, automatic gaze classification from webcam data would make it possible to avoid painstaking manual coding. However, the lack of a controlled experimental environment may introduce various noise factors that impede automatic face detection or gaze classification. We created an adult webcam dataset that systematically reproduced noise factors from infant webcam studies which might affect automated gaze coding accuracy. We varied participants' left-right offset, distance to the camera, facial rotation, and the direction of the lighting source. Running two state-of-the-art classification algorithms (iCatcher+ and OWLET) revealed that face detection performance was particularly affected by the lighting source, while gaze coding accuracy was consistently affected by the distance to the camera and the lighting source. Morphing participants' faces to be unidentifiable did not generally affect the results, suggesting that facial anonymization could be used when making online video data publicly available for purposes of further study and transparency. Our findings will help improve study design for infant and adult participants in online experiments. Moreover, training algorithms on our dataset will allow researchers to improve robustness and enable developmental psychologists to leverage online testing more efficiently.