

eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification

Authors

Barz Michael, Bhatti Omair Shahzad, Alam Hasan Md Tusfiqur, Nguyen Duy Minh Ho, Altmeyer Kristin, Malone Sarah, Sonntag Daniel

Affiliations

Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany;

Applied Artificial Intelligence, University of Oldenburg, 26129 Oldenburg, Germany.

Publication

J Eye Mov Res. 2025 Jul 7;18(4):27. doi: 10.3390/jemr18040027. eCollection 2025 Aug.

DOI: 10.3390/jemr18040027
PMID: 40708803
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12286043/
Abstract

Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations' validity and reliability, and efficiency during a data annotation task. We asked our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conducted a semi-structured interview to understand how participants used the provided IML features and assessed our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
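The abstract does not detail the few-shot model behind the IML-support version, only that fixation-to-AOI suggestions come from few-shot image classification. A common instantiation of that idea, a nearest-prototype classifier over image-crop embeddings, can be sketched as below. All function names and the toy 3-D "embeddings" are illustrative, not the authors' implementation.

```python
import numpy as np

def build_prototypes(support_embeddings, support_labels):
    # Average the few labeled support embeddings per AOI to get one prototype each.
    prototypes = {}
    for label in set(support_labels):
        vecs = [e for e, lab in zip(support_embeddings, support_labels) if lab == label]
        prototypes[label] = np.mean(vecs, axis=0)
    return prototypes

def suggest_aoi(fixation_embedding, prototypes):
    # Rank AOIs by cosine similarity between the fixation-crop embedding
    # and each class prototype; return the best label and its score.
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = {label: cosine(fixation_embedding, p) for label, p in prototypes.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy example: two AOIs ("whiteboard", "laptop") with two support crops each.
support = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0]),
           np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.0])]
labels = ["whiteboard", "whiteboard", "laptop", "laptop"]
protos = build_prototypes(support, labels)
aoi, score = suggest_aoi(np.array([0.95, 0.05, 0.0]), protos)
```

Under this reading, the "learns from corrective user feedback" loop could be as simple as appending each manually corrected crop to the support set, so prototypes sharpen as annotation proceeds; whether eyeNotate does exactly this is an assumption here.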


Figures (g001–g009, Appendix A1–A5):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/cc8b5f8efd54/jemr-18-00027-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/96115b40e31e/jemr-18-00027-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/b09ec85beb61/jemr-18-00027-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/7e227cfe280c/jemr-18-00027-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/ac59ba1518cf/jemr-18-00027-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/5bb7f3eb9381/jemr-18-00027-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/e68a43ab41d1/jemr-18-00027-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/dc31592ba899/jemr-18-00027-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/a4cb4dc7b5ca/jemr-18-00027-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/30ab56c9e3c3/jemr-18-00027-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/807c4483d772/jemr-18-00027-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/2b748fb19ee2/jemr-18-00027-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/f4ee0b0f08f2/jemr-18-00027-g0A4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250f/12286043/2f7eaa556225/jemr-18-00027-g0A5.jpg

Similar articles

1. eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification.
J Eye Mov Res. 2025 Jul 7;18(4):27. doi: 10.3390/jemr18040027. eCollection 2025 Aug.
2. A New Measure of Quantified Social Health Is Associated With Levels of Discomfort, Capability, and Mental and General Health Among Patients Seeking Musculoskeletal Specialty Care.
Clin Orthop Relat Res. 2025 Apr 1;483(4):647-663. doi: 10.1097/CORR.0000000000003394. Epub 2025 Feb 5.
3. Survivor, family and professional experiences of psychosocial interventions for sexual abuse and violence: a qualitative evidence synthesis.
Cochrane Database Syst Rev. 2022 Oct 4;10(10):CD013648. doi: 10.1002/14651858.CD013648.pub2.
4. Short-Term Memory Impairment.
5. Monitoring Adverse Drug Events in Web Forums: Evaluation of a Pipeline and Use Case Study.
J Med Internet Res. 2024 Jun 18;26:e46176. doi: 10.2196/46176.
6. Emergency Medical Services Streaming Enabled Evaluation In Trauma: The SEE-IT Feasibility RCT.
Health Soc Care Deliv Res. 2025 May 28:1-38. doi: 10.3310/EUFS2314.
7. Audit and feedback: effects on professional practice.
Cochrane Database Syst Rev. 2025 Mar 25;3(3):CD000259. doi: 10.1002/14651858.CD000259.pub4.
8. Adapting Safety Plans for Autistic Adults with Involvement from the Autism Community.
Autism Adulthood. 2025 May 28;7(3):293-302. doi: 10.1089/aut.2023.0124. eCollection 2025 Jun.
9. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods.
Cochrane Database Syst Rev. 2015 Jul 27;2015(7):MR000042. doi: 10.1002/14651858.MR000042.pub2.
10. Regional cerebral blood flow single photon emission computed tomography for detection of frontotemporal dementia in people with suspected dementia.
Cochrane Database Syst Rev. 2015 Jun 23;2015(6):CD010896. doi: 10.1002/14651858.CD010896.pub2.

References cited in this article

1. I-MPN: inductive message passing network for efficient human-in-the-loop annotation of mobile eye tracking data.
Sci Rep. 2025 Apr 23;15(1):14192. doi: 10.1038/s41598-025-94593-y.
2. Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data.
Behav Res Methods. 2023 Apr;55(3):1372-1391. doi: 10.3758/s13428-022-01833-4. Epub 2022 Jun 1.
3. Augmented Reality for Presenting Real-Time Data During Students' Laboratory Work: Comparing a Head-Mounted Display With a Separate Display.
Front Psychol. 2022 Mar 7;13:804742. doi: 10.3389/fpsyg.2022.804742. eCollection 2022.
4. Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4.
Sensors (Basel). 2021 Nov 18;21(22):7668. doi: 10.3390/s21227668.
5. Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze.
Sensors (Basel). 2021 Jun 16;21(12):4143. doi: 10.3390/s21124143.
6. Automating Areas of Interest Analysis in Mobile Eye Tracking Experiments based on Machine Learning.
J Eye Mov Res. 2018 Dec 10;11(6). doi: 10.16910/jemr.11.6.6.
7. Eye tracking in Educational Science: Theoretical frameworks and research agendas.
J Eye Mov Res. 2017 Feb 4;10(1). doi: 10.16910/jemr.10.1.3.
8. Mask R-CNN.
IEEE Trans Pattern Anal Mach Intell. 2020 Feb;42(2):386-397. doi: 10.1109/TPAMI.2018.2844175. Epub 2018 Jun 5.
9. Visual Analytics for Mobile Eye Tracking.
IEEE Trans Vis Comput Graph. 2017 Jan;23(1):301-310. doi: 10.1109/TVCG.2016.2598695.
10. Delving into Egocentric Actions.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2015 Jun;2015:287-295. doi: 10.1109/CVPR.2015.7298625.