Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4.

Affiliations

Physics Education Research Group, Physics Department, TU Kaiserslautern, 67663 Kaiserslautern, Germany.

Media Informatics Group, Institute of Informatics, LMU Munich, 80337 Munich, Germany.

Publication

Sensors (Basel). 2021 Nov 18;21(22):7668. doi: 10.3390/s21227668.

DOI: 10.3390/s21227668
PMID: 34833742
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8621024/
Abstract

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data for real objects during an authentic students' lab course. In a comparison of three different Convolutional Neural Networks (CNN), a Faster Region-Based-CNN, you only look once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user's gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
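
The pipeline the abstract describes (object detection on the scene-camera video, optical-flow estimation between detections, and automatic assignment of gaze samples to the detected objects) can be sketched compactly. The following is a minimal illustration under stated assumptions, not the authors' implementation: Python with OpenCV is an assumed choice of tooling, `detect_objects` is a hypothetical placeholder for a YOLO v4 forward pass, and `DETECT_EVERY` and the Farnebäck flow parameters are illustrative. Gaze samples are assumed to be already mapped into scene-camera pixel coordinates.

```python
# Minimal sketch (not the authors' code): run YOLO v4 on every N-th
# scene-camera frame, propagate the resulting bounding boxes with dense
# optical flow in between, and assign each gaze sample to the object box
# it falls in. All names and parameters are illustrative assumptions.
import cv2

DETECT_EVERY = 10  # assumed: re-detect every 10 frames


def detect_objects(frame):
    """Hypothetical placeholder for a YOLO v4 forward pass (e.g. via
    cv2.dnn with Darknet weights). Returns [(label, x1, y1, x2, y2), ...]."""
    raise NotImplementedError


def shift_boxes(boxes, prev_gray, gray):
    """Move each box by the mean dense optical flow inside it."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    shifted = []
    for label, x1, y1, x2, y2 in boxes:
        patch = flow[int(y1):int(y2), int(x1):int(x2)]
        if patch.size == 0:            # box has left the frame; keep as-is
            shifted.append((label, x1, y1, x2, y2))
            continue
        dx, dy = patch.reshape(-1, 2).mean(axis=0)
        shifted.append((label, x1 + dx, y1 + dy, x2 + dx, y2 + dy))
    return shifted


def gaze_to_object(gaze, boxes):
    """Label of the smallest box containing the gaze point, else None."""
    gx, gy = gaze
    hits = [((x2 - x1) * (y2 - y1), label)
            for label, x1, y1, x2, y2 in boxes
            if x1 <= gx <= x2 and y1 <= gy <= y2]
    return min(hits)[1] if hits else None


def annotate(video_path, gaze_per_frame):
    """gaze_per_frame: one (x, y) scene-camera gaze sample per frame."""
    cap = cv2.VideoCapture(video_path)
    boxes, prev_gray, labels = [], None, []
    for i, gaze in enumerate(gaze_per_frame):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if i % DETECT_EVERY == 0:
            boxes = detect_objects(frame)                # fresh detections
        elif prev_gray is not None:
            boxes = shift_boxes(boxes, prev_gray, gray)  # flow update
        labels.append(gaze_to_object(gaze, boxes))
        prev_gray = gray
    cap.release()
    return labels
```

The smallest-box rule is only one way to resolve overlapping detections, and in practice assignment is often done per fixation rather than per raw sample; the abstract notes that several such problems with detection-based gaze assignment are identified and discussed in the paper.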


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/302a28b40ac9/sensors-21-07668-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/b91274000d18/sensors-21-07668-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/00ab728f503d/sensors-21-07668-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/54461df5ee6b/sensors-21-07668-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/eb16036ac01e/sensors-21-07668-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/d497bbf6736b/sensors-21-07668-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/ae2915f51652/sensors-21-07668-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/ae1698c6d3e9/sensors-21-07668-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d45e/8621024/3ac5ac20df4a/sensors-21-07668-g009.jpg

Similar Articles

1. Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4.
   Sensors (Basel). 2021 Nov 18;21(22):7668. doi: 10.3390/s21227668.
2. Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data.
   Behav Res Methods. 2023 Apr;55(3):1372-1391. doi: 10.3758/s13428-022-01833-4. Epub 2022 Jun 1.
3. From lab-based studies to eye-tracking in virtual and real worlds: conceptual and methodological problems and solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019.
   J Eye Mov Res. 2019 Nov 25;12(7). doi: 10.16910/jemr.12.7.8.
4. Design of a Scalable and Fast YOLO for Edge-Computing Devices.
   Sensors (Basel). 2020 Nov 27;20(23):6779. doi: 10.3390/s20236779.
5. Agricultural Greenhouses Detection in High-Resolution Satellite Images Based on Convolutional Neural Networks: Comparison of Faster R-CNN, YOLO v3 and SSD.
   Sensors (Basel). 2020 Aug 31;20(17):4938. doi: 10.3390/s20174938.
6. Training-Based Methods for Comparison of Object Detection Methods for Visual Object Tracking.
   Sensors (Basel). 2018 Nov 16;18(11):3994. doi: 10.3390/s18113994.
7. A Driver Gaze Estimation Method Based on Deep Learning.
   Sensors (Basel). 2022 May 23;22(10):3959. doi: 10.3390/s22103959.
8. Automating Areas of Interest Analysis in Mobile Eye Tracking Experiments based on Machine Learning.
   J Eye Mov Res. 2018 Dec 10;11(6). doi: 10.16910/jemr.11.6.6.
9. Comparative Evaluation of Convolutional Neural Network Object Detection Algorithms for Vehicle Detection.
   J Imaging. 2024 Jul 5;10(7):162. doi: 10.3390/jimaging10070162.
10. Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze.
    Sensors (Basel). 2021 Jun 16;21(12):4143. doi: 10.3390/s21124143.

Cited By

1. eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification.
   J Eye Mov Res. 2025 Jul 7;18(4):27. doi: 10.3390/jemr18040027. eCollection 2025 Aug.
2. I-MPN: inductive message passing network for efficient human-in-the-loop annotation of mobile eye tracking data.
   Sci Rep. 2025 Apr 23;15(1):14192. doi: 10.1038/s41598-025-94593-y.
3. The fundamentals of eye tracking part 3: How to choose an eye tracker.
   Behav Res Methods. 2025 Jan 22;57(2):67. doi: 10.3758/s13428-024-02587-x.
4. Towards Automatic Object Detection and Activity Recognition in Indoor Climbing.
   Sensors (Basel). 2024 Oct 8;24(19):6479. doi: 10.3390/s24196479.
5. Quantifying Dwell Time With Location-based Augmented Reality: Dynamic AOI Analysis on Mobile Eye Tracking Data With Vision Transformer.
   J Eye Mov Res. 2024 Apr 29;17(3). doi: 10.16910/jemr.17.3.3. eCollection 2024.
6. MYFix: Automated Fixation Annotation of Eye-Tracking Videos.
   Sensors (Basel). 2024 Apr 23;24(9):2666. doi: 10.3390/s24092666.
7. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes.
   Proc Hum Factors Ergon Soc Annu Meet. 2023 Sep;67(1):953-958. doi: 10.1177/21695067231192929. Epub 2023 Oct 25.
8. Cow detection and tracking system utilizing multi-feature tracking algorithm.
   Sci Rep. 2023 Oct 13;13(1):17423. doi: 10.1038/s41598-023-44669-4.
9. Experimental Study of Garlic Root Cutting Based on Deep Learning Application in Food Primary Processing.
   Foods. 2022 Oct 20;11(20):3268. doi: 10.3390/foods11203268.
10. Platelet Detection Based on Improved YOLO_v3.
    Cyborg Bionic Syst. 2022 Sep 14;2022:9780569. doi: 10.34133/2022/9780569. eCollection 2022.

References

1. Eye Tracking in Virtual Reality.
   J Eye Mov Res. 2019 Apr 5;12(1). doi: 10.16910/jemr.12.1.3.
2. Eye tracking in Educational Science: Theoretical frameworks and research agendas.
   J Eye Mov Res. 2017 Feb 4;10(1). doi: 10.16910/jemr.10.1.3.
3. ARETT: Augmented Reality Eye Tracking Toolkit for Head Mounted Displays.
   Sensors (Basel). 2021 Mar 23;21(6):2234. doi: 10.3390/s21062234.
4. GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker.
   Behav Res Methods. 2020 Jun;52(3):1244-1253. doi: 10.3758/s13428-019-01314-1.
5. Multimodal Teaching Analytics: Automated Extraction of Orchestration Graphs from Wearable Sensor Data.
   J Comput Assist Learn. 2018 Apr;34(2):193-203. doi: 10.1111/jcal.12232. Epub 2018 Jan 24.
6. Visual Analytics for Mobile Eye Tracking.
   IEEE Trans Vis Comput Graph. 2017 Jan;23(1):301-310. doi: 10.1109/TVCG.2016.2598695.
7. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
8. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.
9. Eye tracking for skills assessment and training: a systematic review.
   J Surg Res. 2014 Sep;191(1):169-78. doi: 10.1016/j.jss.2014.04.032. Epub 2014 Apr 24.