

MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems

Author

Kar Anuradha

Affiliation

École Normale Supérieure de Lyon, 46 Allée d'Italie, 69007 Lyon, France.

Publication

Vision (Basel). 2020 May 7;4(2):25. doi: 10.3390/vision4020025.

DOI: 10.3390/vision4020025
PMID: 32392760
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7355841/
Abstract

Analyzing the gaze accuracy characteristics of an eye tracker is a critical task as its gaze data is frequently affected by non-ideal operating conditions in various consumer eye tracking applications. In previous research on pattern analysis of gaze data, efforts were made to model human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources as well as quantifying and modeling their impacts on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. It was seen that while the impact of the different error sources on gaze data characteristics were nearly impossible to distinguish by visual inspection or from data statistics, machine learning models were successful in identifying the impact of the different error sources and predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods towards the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study were included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
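The core idea of the study — training a classifier on gaze-error features to identify which operating condition (user distance, head pose, or tracker pose) produced them — can be illustrated with a minimal sketch. This is not the paper's actual pipeline (that lives in the MLGaze repository); the error distributions, feature choices, and model below are illustrative assumptions using synthetic data.

```python
# Hedged sketch: classify the source of gaze error from simple error statistics.
# The per-condition error parameters below are hypothetical, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth_gaze_errors(condition, n=200):
    # Hypothetical (mean angular error in degrees, spread) per error source
    params = {"user_distance": (1.5, 0.4),
              "head_pose": (2.5, 0.8),
              "tracker_pose": (0.8, 0.3)}
    mu, sigma = params[condition]
    err = rng.normal(mu, sigma, size=(n, 16))  # per-target angular errors
    # Features per sample: mean error, error spread, left/right asymmetry
    return np.column_stack([err.mean(axis=1),
                            err.std(axis=1),
                            err[:, :8].mean(axis=1) - err[:, 8:].mean(axis=1)])

conditions = ["user_distance", "head_pose", "tracker_pose"]
X = np.vstack([synth_gaze_errors(c) for c in conditions])
y = np.repeat(np.arange(len(conditions)), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

The abstract's point is visible even in this toy setup: the three conditions overlap enough that eyeballing raw error values is unreliable, yet a learned model separates them from summary statistics. A regression model fit on the same features would correspond to the paper's second task, predicting error magnitude under a given condition.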


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/d535d69bce17/vision-04-00025-g017a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/c3f783dc9ca5/vision-04-00025-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/238eccce4d2e/vision-04-00025-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/5a234c0bc4d6/vision-04-00025-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/c6e02a0e2607/vision-04-00025-g002a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/345fa587883c/vision-04-00025-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/9592526644c0/vision-04-00025-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/a3d257598439/vision-04-00025-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/6dc8792853a8/vision-04-00025-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/bc892eca39fc/vision-04-00025-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/e9baddd04fde/vision-04-00025-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/23fee3b68d38/vision-04-00025-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/dc1607eeaf27/vision-04-00025-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/756925b6f97c/vision-04-00025-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/147d89648f30/vision-04-00025-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/28ee11a70834/vision-04-00025-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/a2f3d5bd4cf1/vision-04-00025-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/442edf909946/vision-04-00025-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8917/7355841/b00e558e8d98/vision-04-00025-g016.jpg

Similar Articles

1. MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems.
Vision (Basel). 2020 May 7;4(2):25. doi: 10.3390/vision4020025.
2. Development of Open-source Software and Gaze Data Repositories for Performance Evaluation of Eye Tracking Systems.
Vision (Basel). 2019 Oct 22;3(4):55. doi: 10.3390/vision3040055.
3. Performance Evaluation Strategies for Eye Gaze Estimation Systems with Quantitative Metrics and Visualizations.
Sensors (Basel). 2018 Sep 18;18(9):3151. doi: 10.3390/s18093151.
4. From lab-based studies to eye-tracking in virtual and real worlds: conceptual and methodological problems and solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019.
J Eye Mov Res. 2019 Nov 25;12(7). doi: 10.16910/jemr.12.7.8.
5. Behavioral Activity Recognition Based on Gaze Ethograms.
Int J Neural Syst. 2020 Jul;30(7):2050025. doi: 10.1142/S0129065720500252. Epub 2020 Jun 9.
6. Hybrid Eye-Tracking on a Smartphone with CNN Feature Extraction and an Infrared 3D Model.
Sensors (Basel). 2020 Jan 19;20(2):543. doi: 10.3390/s20020543.
7. Investigating the link between radiologists' gaze, diagnostic decision, and image content.
J Am Med Inform Assoc. 2013 Nov-Dec;20(6):1067-75. doi: 10.1136/amiajnl-2012-001503. Epub 2013 Jun 20.
8. The impact of slippage on the data quality of head-worn eye trackers.
Behav Res Methods. 2020 Jun;52(3):1140-1160. doi: 10.3758/s13428-019-01307-0.
9. An investigation of the distribution of gaze estimation errors in head mounted gaze trackers using polynomial functions.
J Eye Mov Res. 2018 Jun 30;11(3). doi: 10.16910/jemr.11.3.5.
10. Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality.
Proc ACM Comput Graph Interact Tech. 2024 May;7(2). doi: 10.1145/3654705. Epub 2024 May 17.

Cited By

1. Predicting Behaviour Patterns in Online and PDF Magazines with AI Eye-Tracking.
Behav Sci (Basel). 2024 Aug 5;14(8):677. doi: 10.3390/bs14080677.
2. I DARE: IULM Dataset of Affective Responses.
Front Hum Neurosci. 2024 Mar 20;18:1347327. doi: 10.3389/fnhum.2024.1347327. eCollection 2024.
3. Application of Eye Tracking Technology in Aviation, Maritime, and Construction Industries: A Systematic Review.

References

1. Machine learning algorithm validation with a limited sample size.
PLoS One. 2019 Nov 7;14(11):e0224365. doi: 10.1371/journal.pone.0224365. eCollection 2019.
2. Performance Evaluation Strategies for Eye Gaze Estimation Systems with Quantitative Metrics and Visualizations.
Sensors (Basel). 2018 Sep 18;18(9):3151. doi: 10.3390/s18093151.
3. Variable selection in omics data: A practical evaluation of small sample sizes.
PLoS One. 2018 Jun 21;13(6):e0197910. doi: 10.1371/journal.pone.0197910. eCollection 2018.
4. A new and general approach to signal denoising and eye movement classification based on segmented linear regression.
Sci Rep. 2017 Dec 18;7(1):17726. doi: 10.1038/s41598-017-17983-x.
5. Using machine learning to detect events in eye-tracking data.
Behav Res Methods. 2018 Feb;50(1):160-181. doi: 10.3758/s13428-017-0860-3.
6. The Proximal Trajectory Algorithm in SVM Cross Validation.
IEEE Trans Neural Netw Learn Syst. 2016 May;27(5):966-77. doi: 10.1109/TNNLS.2015.2430935.
7. Representation learning: a review and new perspectives.
IEEE Trans Pattern Anal Mach Intell. 2013 Aug;35(8):1798-828. doi: 10.1109/TPAMI.2013.50.
8. Regularization Paths for Generalized Linear Models via Coordinate Descent.
J Stat Softw. 2010;33(1):1-22.
9. Tri-state median filter for image denoising.
IEEE Trans Image Process. 1999;8(12):1834-8. doi: 10.1109/83.806630.