Mould Matthew S, Foster David H, Amano Kinjiro, Oakley John P
School of Electrical and Electronic Engineering, University of Manchester, Manchester, UK.
Vision Res. 2012 Mar 15;57:18-25. doi: 10.1016/j.visres.2011.12.006. Epub 2012 Jan 2.
There is no standard method for classifying eye fixations. Thresholds for speed, acceleration, duration, and stability of point of gaze have each been employed to demarcate data, but they have no commonly accepted values. Here, some general distributional properties of eye movements were used to construct a simple method for classifying fixations, without parametric assumptions or expert judgment. The method was primarily speed-based, but the required optimum speed threshold was derived automatically from individual data for each observer and stimulus with the aid of Tibshirani, Walther, and Hastie's 'gap statistic'. An optimum duration threshold, also derived automatically from individual data, was used to eliminate the effects of instrumental noise. The method was tested on data recorded from a video eye-tracker sampling at 250 frames a second while experimental observers viewed static natural scenes in over 30,000 one-second trials. The resulting classifications were compared with those by three independent expert visual classifiers, with 88-94% agreement, and also against two existing parametric methods. Robustness to instrumental noise and sampling rate was verified in separate simulations. The method was applied to the recorded data to illustrate the variation of mean fixation duration and saccade amplitude across observers and scenes.
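The core of the classification described above is a two-stage rule: label gaze samples whose point-to-point speed falls below a speed threshold as candidate fixation samples, then discard candidate runs shorter than a duration threshold to suppress instrumental noise. Below is a minimal sketch of that rule in Python. In the paper both thresholds are derived automatically per observer and stimulus (the speed threshold via the gap statistic); here they are fixed, illustrative values, and the function names and defaults are assumptions, not the authors' code.

```python
import numpy as np

def classify_fixations(x, y, fs=250.0, speed_thresh=30.0, min_dur=0.050):
    """Label each gaze sample as fixation (True) or not.

    x, y          -- gaze position in degrees of visual angle
    fs            -- sampling rate in Hz (the paper's tracker ran at 250 Hz)
    speed_thresh  -- speed threshold in deg/s (fixed here; derived
                     automatically via the gap statistic in the paper)
    min_dur       -- minimum fixation duration in s (fixed here; also
                     derived automatically in the paper)
    """
    # Point-to-point speed between consecutive samples, in deg/s.
    speed = np.hypot(np.diff(x), np.diff(y)) * fs
    # Assign each sample a "slow" flag (first sample reuses the first speed).
    slow = np.concatenate([[speed[0] < speed_thresh], speed < speed_thresh])

    # Keep only contiguous slow runs at least min_dur long.
    fix = np.zeros_like(slow)
    i, n = 0, len(slow)
    while i < n:
        if slow[i]:
            j = i
            while j < n and slow[j]:
                j += 1
            if (j - i) / fs >= min_dur:
                fix[i:j] = True
            i = j
        else:
            i += 1
    return fix
```

For example, a synthetic trace holding at one location, making a fast 5-degree jump, and holding again yields two fixation runs separated by the saccade samples.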