
Validation of an open source, remote web-based eye-tracking method (WebGazer) for research in early childhood.

Affiliations

Department of Psychology, Ludwig-Maximilians-Universität München, München, Germany.

Facultad de Psicología, Universidad Nacional Autónoma de México, Ciudad de México, México.

Publication Information

Infancy. 2024 Jan-Feb;29(1):31-55. doi: 10.1111/infa.12564. Epub 2023 Oct 18.

Abstract

Measuring eye movements remotely via the participant's webcam promises to be an attractive methodological addition to in-person eye-tracking in the lab. However, there is a lack of systematic research comparing remote web-based eye-tracking with in-lab eye-tracking in young children. We report a multi-lab study that compared these two measures in an anticipatory looking task with toddlers using WebGazer.js and jsPsych. Results of our remotely tested sample of 18-27-month-old toddlers (N = 125) revealed that web-based eye-tracking successfully captured goal-based action predictions, although the proportion of the goal-directed anticipatory looking was lower compared to the in-lab sample (N = 70). As expected, attrition rate was substantially higher in the web-based (42%) than the in-lab sample (10%). Excluding trials based on visual inspection of the match of time-locked gaze coordinates and the participant's webcam video overlayed on the stimuli was an important preprocessing step to reduce noise in the data. We discuss the use of this remote web-based method in comparison with other current methodological innovations. Our study demonstrates that remote web-based eye-tracking can be a useful tool for testing toddlers, facilitating recruitment of larger and more diverse samples; a caveat to consider is the larger drop-out rate.


Similar Articles

1
Validation of an open source, remote web-based eye-tracking method (WebGazer) for research in early childhood.
Infancy. 2024 Jan-Feb;29(1):31-55. doi: 10.1111/infa.12564. Epub 2023 Oct 18.
2
Assessing two methods of webcam-based eye-tracking for child language research.
J Child Lang. 2025 May;52(3):675-708. doi: 10.1017/S0305000924000175. Epub 2024 May 7.
5
MouseView.js: Reliable and valid attention tracking in web-based experiments using a cursor-directed aperture.
Behav Res Methods. 2022 Aug;54(4):1663-1687. doi: 10.3758/s13428-021-01703-5. Epub 2021 Sep 29.
7
Webcams as Windows to the Mind? A Direct Comparison Between In-Lab and Web-Based Eye-Tracking Methods.
Open Mind (Camb). 2024 Nov 22;8:1369-1424. doi: 10.1162/opmi_a_00171. eCollection 2024.
8
OWLET: An automated, open-source method for infant gaze tracking using smartphone and webcam recordings.
Behav Res Methods. 2023 Sep;55(6):3149-3163. doi: 10.3758/s13428-022-01962-w. Epub 2022 Sep 7.
9
Comparing Online Webcam- and Laboratory-Based Eye-Tracking for the Assessment of Infants' Audio-Visual Synchrony Perception.
Front Psychol. 2022 Jan 11;12:733933. doi: 10.3389/fpsyg.2021.733933. eCollection 2021.
10
eyeScrollR: A software method for reproducible mapping of eye-tracking data from scrollable web pages.
Behav Res Methods. 2024 Apr;56(4):3380-3395. doi: 10.3758/s13428-024-02343-1. Epub 2024 Feb 12.

Cited By

1
The Feasibility of Remote Visual-World Eye-Tracking With Young Children.
Open Mind (Camb). 2025 Jul 26;9:992-1019. doi: 10.1162/opmi.a.16. eCollection 2025.
2
Automated Infant Eye Tracking: A Systematic Historical Review.
Infancy. 2025 Jul-Aug;30(4):e70031. doi: 10.1111/infa.70031.
3
The acceptability and validity of AI-generated psycholinguistic stimuli.
Heliyon. 2025 Jan 17;11(2):e42083. doi: 10.1016/j.heliyon.2025.e42083. eCollection 2025 Jan 30.
4
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
Behav Res Methods. 2025 Jan 6;57(1):46. doi: 10.3758/s13428-024-02529-7.
5
Quantifying Arm and Leg Movements in 3-Month-Old Infants Using Pose Estimation: Proof of Concept.
Sensors (Basel). 2024 Nov 27;24(23):7586. doi: 10.3390/s24237586.
6
Attrition in a large-scale habituation task administered at home.
Br J Dev Psychol. 2025 Mar;43(1):124-138. doi: 10.1111/bjdp.12528. Epub 2024 Oct 25.
7
Conducting Developmental Research Online vs. In-Person: A Meta-Analysis.
Open Mind (Camb). 2024 Jun 12;8:795-808. doi: 10.1162/opmi_a_00147. eCollection 2024.
8
Closing the eye-tracking gap in reading research.
Front Psychol. 2024 Jun 3;15:1425219. doi: 10.3389/fpsyg.2024.1425219. eCollection 2024.
10
False belief understanding in deaf children: what are the difficulties?
Front Psychol. 2024 Jan 18;15:1238505. doi: 10.3389/fpsyg.2024.1238505. eCollection 2024.

References

1
Conducting Developmental Research Online vs. In-Person: A Meta-Analysis.
Open Mind (Camb). 2024 Jun 12;8:795-808. doi: 10.1162/opmi_a_00147. eCollection 2024.
3
e-Babylab: An open-source browser-based tool for unmoderated online developmental studies.
Behav Res Methods. 2024 Aug;56(5):4530-4552. doi: 10.3758/s13428-023-02200-7. Epub 2023 Aug 24.
7
Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest.
Behav Res Methods. 2024 Feb;56(2):881-907. doi: 10.3758/s13428-022-02056-3. Epub 2023 Mar 8.
8
OWLET: An automated, open-source method for infant gaze tracking using smartphone and webcam recordings.
Behav Res Methods. 2023 Sep;55(6):3149-3163. doi: 10.3758/s13428-022-01962-w. Epub 2022 Sep 7.
9
Eye tracking: empirical foundations for a minimal reporting guideline.
Behav Res Methods. 2023 Jan;55(1):364-416. doi: 10.3758/s13428-021-01762-8. Epub 2022 Apr 6.
10
Improving the generalizability of infant psychological research: The ManyBabies model.
Behav Brain Sci. 2022 Feb 10;45:e35. doi: 10.1017/S0140525X21000455.
