Lim Jia Zheng, Mountstephens James, Teo Jason
Evolutionary Computing Laboratory, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia.
Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia.
Front Neurorobot. 2022 Feb 1;15:796895. doi: 10.3389/fnbot.2021.796895. eCollection 2021.
Eye tracking is a technology for measuring and determining an individual's eye movements and eye positions. These eye data can be collected and recorded using an eye tracker. Eye-tracking data offer unprecedented insights into human actions and environments, digitize how people interact with computers, and provide novel opportunities for passive biometric-based classification, such as emotion prediction. The objective of this article is to review which machine learning features can be extracted from eye-tracking data for classification tasks.
We performed a systematic literature review (SLR) covering eye-tracking studies on classification published from 2016 to the present. In the search process, we used four independent electronic databases: IEEE Xplore, the ACM Digital Library, ScienceDirect, and Google Scholar. The selection process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) search strategy to choose the appropriate relevant articles.
Of the 420 articles returned by our initial search query, 37 were ultimately identified as directly relevant to our research question under our methodology and used in the qualitative synthesis.
The features that could be extracted from eye-tracking data included pupil size, saccades, fixations, velocity, blinks, pupil position, electrooculogram (EOG) signals, and gaze point. Fixation was the most commonly used feature among the reviewed studies.
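Fixation, the most common feature above, is typically derived from raw gaze samples rather than reported directly by hardware. As an illustrative sketch only (not a method from any reviewed study), the following shows a dispersion-threshold (I-DT) approach to fixation detection; the sample rate, thresholds, and the gaze format of (x, y) pixel coordinates per sample are assumptions made for this example.

```python
# Illustrative sketch: dispersion-threshold (I-DT) fixation detection.
# A fixation is a run of gaze samples that stays within a small spatial
# dispersion for at least a minimum duration. Parameter values below
# are assumptions for the example, not recommendations.

def detect_fixations(gaze, sample_rate_hz=60,
                     dispersion_px=30, min_duration_ms=100):
    """Return (start_idx, end_idx, duration_ms) tuples for each fixation
    found in a list of (x, y) gaze samples."""
    min_samples = int(min_duration_ms * sample_rate_hz / 1000)
    fixations = []
    i = 0
    while i + min_samples <= len(gaze):
        j = i + min_samples
        xs, ys = zip(*gaze[i:j])
        # Dispersion = (max_x - min_x) + (max_y - min_y) over the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_px:
            # Grow the window while dispersion stays under the threshold
            while j < len(gaze):
                xs = xs + (gaze[j][0],)
                ys = ys + (gaze[j][1],)
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                    break
                j += 1
            fixations.append((i, j - 1, (j - i) * 1000 / sample_rate_hz))
            i = j
        else:
            i += 1
    return fixations
```

Summary statistics over the detected fixations (count, mean duration, dispersion of fixation centers) are then what typically serve as machine learning features for classification.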