
Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture.

Affiliations

Department of Psychology, University of Alberta, Edmonton, Alberta, Canada.

Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada.

Publication Information

Behav Res Methods. 2024 Jan;56(1):18-31. doi: 10.3758/s13428-022-01958-6. Epub 2022 Sep 9.

DOI: 10.3758/s13428-022-01958-6
PMID: 36085543
Abstract

Assessing gaze behavior during real-world tasks is difficult; dynamic bodies moving through dynamic worlds make gaze analysis difficult. Current approaches involve laborious coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors. When combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how linear regression models generate accurate 3D gaze vectors. We compare spatial accuracy of models derived from four short calibration routines across three pupil data inputs: the efficacy of calibration routines was assessed, a validation task requiring short fixations on task-relevant locations, and a naturalistic object interaction task to bridge the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant is looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors with no differences in accuracy when using monocular or binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
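The regression approach described in the abstract can be sketched in code. This is a minimal illustration under assumed conventions, not the authors' implementation: it takes binocular pupil positions (x, y per eye) as predictors, fits per-component linear models for spherical gaze angles (azimuth, elevation) by ordinary least squares against calibration targets, and converts predicted angles to a unit 3D gaze vector. All function names and array shapes here are hypothetical.

```python
import numpy as np

def fit_gaze_model(pupils, gaze_angles):
    """Fit a linear map from binocular pupil positions to spherical
    gaze angles via ordinary least squares.

    pupils: (n, 4) array -- left/right pupil (x, y) from the eye tracker
    gaze_angles: (n, 2) array -- calibration azimuth/elevation (radians)
    Returns a (5, 2) coefficient matrix (intercept + 4 weights per angle).
    """
    X = np.hstack([np.ones((len(pupils), 1)), pupils])  # intercept column
    coefs, *_ = np.linalg.lstsq(X, gaze_angles, rcond=None)
    return coefs

def predict_gaze_angles(coefs, pupils):
    """Apply the fitted model to new pupil samples."""
    X = np.hstack([np.ones((len(pupils), 1)), pupils])
    return X @ coefs

def angles_to_vector(azimuth, elevation):
    """Convert spherical gaze angles (radians) to a unit 3D gaze vector."""
    return np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])

# Synthetic stand-in for ~1 min of calibration data at 60 Hz:
rng = np.random.default_rng(0)
pupils = rng.uniform(-1.0, 1.0, (3600, 4))
true_coefs = rng.normal(size=(5, 2))
angles = np.hstack([np.ones((3600, 1)), pupils]) @ true_coefs
coefs = fit_gaze_model(pupils, angles)
```

On noise-free synthetic data the least-squares fit recovers the generating coefficients exactly; with real pupil data the paper's reported sub-centimeter errors depend on calibration routine and coordinate-system choices as discussed above.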


Similar Articles

1
Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture.
Behav Res Methods. 2024 Jan;56(1):18-31. doi: 10.3758/s13428-022-01958-6. Epub 2022 Sep 9.
2
Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles.
Behav Res Methods. 2024 Jan;56(1):53-79. doi: 10.3758/s13428-023-02150-0. Epub 2023 Jun 27.
3
An Integrated Eye-Tracking and Motion Capture System in Synchronized Gaze and Movement Analysis.
IEEE Int Conf Rehabil Robot. 2023 Sep;2023:1-6. doi: 10.1109/ICORR58425.2023.10304692.
4
High-Accuracy 3D Gaze Estimation with Efficient Recalibration for Head-Mounted Gaze Tracking Systems.
Sensors (Basel). 2022 Jun 8;22(12):4357. doi: 10.3390/s22124357.
5
Gaze estimation interpolation methods based on binocular data.
IEEE Trans Biomed Eng. 2012 Aug;59(8):2235-2243. doi: 10.1109/TBME.2012.2201716. Epub 2012 May 30.
6
3D Gaze Estimation Using RGB-IR Cameras.
Sensors (Basel). 2022 Dec 29;23(1):381. doi: 10.3390/s23010381.
7
Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities.
Sci Rep. 2020 Feb 13;10(1):2539. doi: 10.1038/s41598-020-59251-5.
8
A novel gaze tracking method based on the generation of virtual calibration points.
Sensors (Basel). 2013 Aug 16;13(8):10802-22. doi: 10.3390/s130810802.
9
Pupil Response in Visual Tracking Tasks: The Impacts of Task Load, Familiarity, and Gaze Position.
Sensors (Basel). 2024 Apr 16;24(8):2545. doi: 10.3390/s24082545.
10
Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System.
Sensors (Basel). 2016 Jan 5;16(1):60. doi: 10.3390/s16010060.

Cited By

1
Exploring the impact of myoelectric prosthesis controllers on visuomotor behavior.
J Neuroeng Rehabil. 2025 Mar 12;22(1):57. doi: 10.1186/s12984-025-01604-0.
2
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
Behav Res Methods. 2025 Jan 6;57(1):46. doi: 10.3758/s13428-024-02529-7.

References

1
Reaching for known unknowns: Rapid reach decisions accurately reflect the future state of dynamic probabilistic information.
Cortex. 2021 May;138:253-265. doi: 10.1016/j.cortex.2021.02.010. Epub 2021 Feb 24.
2
Gaze and Movement Assessment (GaMA): Inter-site validation of a visuomotor upper limb functional protocol.
PLoS One. 2019 Dec 30;14(12):e0219333. doi: 10.1371/journal.pone.0219333. eCollection 2019.
3
Quantitative Eye Gaze and Movement Differences in Visuomotor Adaptations to Varying Task Demands Among Upper-Extremity Prosthesis Users.
JAMA Netw Open. 2019 Sep 4;2(9):e1911197. doi: 10.1001/jamanetworkopen.2019.11197.
4
DeepLabCut: markerless pose estimation of user-defined body parts with deep learning.
Nat Neurosci. 2018 Sep;21(9):1281-1289. doi: 10.1038/s41593-018-0209-y. Epub 2018 Aug 20.
5
Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks.
J Vis. 2018 Jun 1;18(6):18. doi: 10.1167/18.6.18.
6
Characterization of normative hand movements during two functional upper limb tasks.
PLoS One. 2018 Jun 21;13(6):e0199549. doi: 10.1371/journal.pone.0199549. eCollection 2018.
7
Accuracy of human motion capture systems for sport applications; state-of-the-art review.
Eur J Sport Sci. 2018 Jul;18(6):806-819. doi: 10.1080/17461391.2018.1463397. Epub 2018 May 9.
8
Cluster-based upper body marker models for three-dimensional kinematic analysis: Comparison with an anatomical model and reliability analysis.
J Biomech. 2018 Apr 27;72:228-234. doi: 10.1016/j.jbiomech.2018.02.028. Epub 2018 Feb 27.
9
Examining the Spatiotemporal Disruption to Gaze When Using a Myoelectric Prosthetic Hand.
J Mot Behav. 2018 Jul-Aug;50(4):416-425. doi: 10.1080/00222895.2017.1363703. Epub 2017 Sep 19.
10
Mobile gaze tracking system for outdoor walking behavioral studies.
J Vis. 2016;16(3):27. doi: 10.1167/16.3.27.