
Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture.

Affiliations

Department of Psychology, University of Alberta, Edmonton, Alberta, Canada.

Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada.

Publication information

Behav Res Methods. 2024 Jan;56(1):18-31. doi: 10.3758/s13428-022-01958-6. Epub 2022 Sep 9.

Abstract

Assessing gaze behavior during real-world tasks is difficult: dynamic bodies moving through dynamic worlds make gaze analysis challenging, and current approaches involve laborious manual coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors. When combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how well linear regression models generate accurate 3D gaze vectors. We compare the spatial accuracy of models derived from four short calibration routines across three tasks: the calibration routines themselves (to assess their efficacy), a validation task requiring short fixations on task-relevant locations, and a naturalistic object interaction task that bridges the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant is looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors, with no difference in accuracy between monocular and binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
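The pipeline the abstract describes, fitting a linear regression from pupil positions to gaze angles and converting the result to a unit 3D gaze vector, can be sketched as follows. This is a minimal illustration on synthetic calibration data, not the authors' implementation: the variable names, the use of binocular pupil x/y inputs, and the azimuth/elevation parameterization of the spherical coordinate system are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration routine": 60 samples of normalized binocular pupil
# positions, columns [left_x, left_y, right_x, right_y]. Real data would come
# from a synchronized eye tracker + motion capture session.
pupils = rng.uniform(-1.0, 1.0, size=(60, 4))

# Ground-truth linear mapping, used here only to generate synthetic spherical
# gaze angles [azimuth_deg, elevation_deg] for the calibration samples.
true_W = np.array([[20.0, 0.0],
                   [0.0, 15.0],
                   [20.0, 0.0],
                   [0.0, 15.0]])
true_b = np.array([1.0, -2.0])
angles = pupils @ true_W + true_b

# Fit the regression: design matrix with an intercept column, solved by
# ordinary least squares.
X = np.hstack([pupils, np.ones((len(pupils), 1))])
coef, *_ = np.linalg.lstsq(X, angles, rcond=None)

def gaze_vector(pupil_sample):
    """Predict spherical gaze angles for one pupil sample, then convert them
    to a unit 3D gaze vector in a head-centred frame (y up, z forward)."""
    az, el = np.append(pupil_sample, 1.0) @ coef
    az, el = np.radians(az), np.radians(el)
    return np.array([np.cos(el) * np.sin(az),   # x: rightward
                     np.sin(el),                # y: upward
                     np.cos(el) * np.cos(az)])  # z: forward

v = gaze_vector(pupils[0])
print(v, np.linalg.norm(v))  # unit-length gaze direction
```

In a real workflow, the fitted vector would be anchored at the motion-captured head position and intersected with tracked object locations to automate gaze coding, as the abstract outlines.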

