Suppr 超能文献


Eye-Based Recognition of User Traits and States - A Systematic State-of-the-Art Review

Authors

Langner Moritz, Toreini Peyman, Maedche Alexander

Affiliations

Institute for Information Systems (WIN), Department of Economics and Management, Karlsruhe Institute of Technology (KIT), Kaiserstraße 89-93, 76133 Karlsruhe,

Publication

J Eye Mov Res. 2025 Apr 1;18(2):8. doi: 10.3390/jemr18020008. eCollection 2025 Apr.

DOI: 10.3390/jemr18020008
PMID: 40290619
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12027520/
Abstract

Eye-tracking technology provides high-resolution information about a user's visual behavior and interests. Combined with advances in machine learning, it has become possible to recognize user traits and states using eye-tracking data. Despite increasing research interest, a comprehensive systematic review of eye-based recognition approaches has been lacking. This study aimed to fill this gap by systematically reviewing and synthesizing the existing literature on the machine-learning-based recognition of user traits and states using eye-tracking data following PRISMA 2020 guidelines. The inclusion criteria focused on studies that applied eye-tracking data to recognize user traits and states with machine learning or deep learning approaches. Searches were performed in the ACM Digital Library and IEEE Xplore and the found studies were assessed for the risk of bias using standard methodological criteria. The data synthesis included a conceptual framework that covered the task, context, technology and data processing, and recognition targets. A total of 90 studies were included that encompassed a variety of tasks (e.g., visual, driving, learning) and contexts (e.g., computer screen, simulator, wild). The recognition targets included cognitive and affective states (e.g., emotions, cognitive workload) and user traits (e.g., personality, working memory). A set of various machine learning techniques, such as Support Vector Machines (SVMs), Random Forests, and deep learning models were applied to recognize user states and traits. This review identified state-of-the-art approaches and gaps, which highlighted the need for building up best practices, larger-scale datasets, and diversifying tasks and contexts. Future research should focus on improving the ecological validity, multi-modal approaches for robust user modeling, and developing gaze-adaptive systems.
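The pipeline the abstract describes — raw gaze samples are segmented into events, summarized as features, and fed to a classifier such as an SVM or Random Forest — can be illustrated with a minimal sketch of its standard preprocessing step: a dispersion-threshold (I-DT) fixation detector. The thresholds, units, and function name below are illustrative assumptions, not values taken from the reviewed paper.

```python
# Minimal dispersion-threshold (I-DT) fixation detector, a common
# preprocessing step for eye-based machine learning. Thresholds and
# names are illustrative, not taken from the review.

def detect_fixations(samples, max_dispersion=1.0, min_samples=3):
    """samples: list of (t, x, y) gaze points, sorted by time.
    Returns fixations as (start_t, end_t, centroid_x, centroid_y)."""
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        window = samples[i:i + min_samples]
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # dispersion = (max x - min x) + (max y - min y) over the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            j = i + min_samples
            # grow the window while dispersion stays under the threshold
            while j < n:
                xs.append(samples[j][1])
                ys.append(samples[j][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                j += 1
            fixations.append((samples[i][0], samples[j - 1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j  # continue after the detected fixation
        else:
            i += 1  # slide past the noisy/saccadic sample
    return fixations
```

From fixations like these, per-trial features (fixation count, mean fixation duration, saccade amplitude) can be computed and passed to a classifier, which is the general shape of the recognition approaches the review synthesizes.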


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b417/12027520/e5d442767fa3/jemr-18-00008-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b417/12027520/2308532bc6f2/jemr-18-00008-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b417/12027520/5e6c57914014/jemr-18-00008-g003.jpg

Similar Articles

1. Eye-Based Recognition of User Traits and States - A Systematic State-of-the-Art Review.
J Eye Mov Res. 2025 Apr 1;18(2):8. doi: 10.3390/jemr18020008. eCollection 2025 Apr.
2. Behavioral Activity Recognition Based on Gaze Ethograms.
Int J Neural Syst. 2020 Jul;30(7):2050025. doi: 10.1142/S0129065720500252. Epub 2020 Jun 9.
3. Data-driven modeling and prediction of blood glucose dynamics: Machine learning applications in type 1 diabetes.
Artif Intell Med. 2019 Jul;98:109-134. doi: 10.1016/j.artmed.2019.07.007. Epub 2019 Jul 26.
4. Machine learning based on eye-tracking data to identify Autism Spectrum Disorder: A systematic review and meta-analysis.
J Biomed Inform. 2023 Jan;137:104254. doi: 10.1016/j.jbi.2022.104254. Epub 2022 Dec 9.
5. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
6. Eye Movements During Everyday Behavior Predict Personality Traits.
Front Hum Neurosci. 2018 Apr 13;12:105. doi: 10.3389/fnhum.2018.00105. eCollection 2018.
7. Eye movements in real and simulated driving and navigation control - Foreword to the Special Issue.
J Eye Mov Res. 2021 Jun 3;12(3). doi: 10.16910/jemr.12.3.0.
8. Eye-Tracking Feature Extraction for Biometric Machine Learning.
Front Neurorobot. 2022 Feb 1;15:796895. doi: 10.3389/fnbot.2021.796895. eCollection 2021.
9. Leveraging Eye Tracking to Prioritize Relevant Medical Record Data: Comparative Machine Learning Study.
J Med Internet Res. 2020 Apr 2;22(4):e15876. doi: 10.2196/15876.
10. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.

References Cited in This Article

1. Gaze-based detection of mind wandering during audio-guided panorama viewing.
Sci Rep. 2024 Nov 14;14(1):27955. doi: 10.1038/s41598-024-79172-x.
2. Cognitive Load Prediction From Multimodal Physiological Signals Using Multiview Learning.
IEEE J Biomed Health Inform. 2025 May;29(5):3282-3292. doi: 10.1109/JBHI.2023.3346205. Epub 2025 May 6.
3. : A Multimodal Dataset for Cognitive Load Estimation.
Sensors (Basel). 2022 Dec 28;23(1):340. doi: 10.3390/s23010340.
4. Characterizing Physiological Responses to Fear, Frustration, and Insight in Virtual Reality.
IEEE Trans Vis Comput Graph. 2022 Nov;28(11):3917-3927. doi: 10.1109/TVCG.2022.3203113. Epub 2022 Oct 21.
5. COLET: A dataset for COgnitive workLoad estimation based on eye-tracking.
Comput Methods Programs Biomed. 2022 Sep;224:106989. doi: 10.1016/j.cmpb.2022.106989. Epub 2022 Jul 3.
6. Sex Difference in Emotion Recognition under Sleep Deprivation: Evidence from EEG and Eye-tracking.
Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:6449-6452. doi: 10.1109/EMBC46164.2021.9630808.
7. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews.
BMJ. 2021 Mar 29;372:n71. doi: 10.1136/bmj.n71.
8. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews.
BMJ. 2021 Mar 29;372:n160. doi: 10.1136/bmj.n160.
9. Review of Eye Tracking Metrics Involved in Emotional and Cognitive Processes.
IEEE Rev Biomed Eng. 2023;16:260-277. doi: 10.1109/RBME.2021.3066072. Epub 2023 Jan 5.
10. Eye-Tracking Analysis for Emotion Recognition.
Comput Intell Neurosci. 2020 Aug 27;2020:2909267. doi: 10.1155/2020/2909267. eCollection 2020.