University of Helsinki, Cognitive Science, Helsinki, 00014, Finland.
Sci Rep. 2017 Dec 18;7(1):17726. doi: 10.1038/s41598-017-17983-x.
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise, or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze-position time series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits, and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable both for low-noise controlled laboratory settings and for high-noise complex field experiments. This is desirable for harmonizing the gaze-behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye-movement research. Denoising and classification performance are assessed using multiple datasets. A full open-source implementation is included.
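To make the segmentation idea concrete, the sketch below fits a piecewise linear function to a one-dimensional gaze trace and uses the fitted breakpoints as event boundaries. It is illustrative only and not the authors' implementation: it uses an exact O(n^2) dynamic program in place of the paper's approximately optimal O(n) segmentation, and it labels segments with a single invented speed threshold rather than the data-driven parameters the paper derives from human labeling. All function names, the penalty value, and the threshold are assumptions of this sketch.

    # Illustrative sketch only (not the paper's algorithm): exact O(n^2)
    # dynamic-programming segmentation into piecewise linear pieces, followed
    # by a toy speed-based labeling. Thresholds are placeholders.
    import numpy as np

    def linear_fit_cost(t, x):
        """Sum of squared residuals of a least-squares line fit to (t, x)."""
        A = np.column_stack([t, np.ones_like(t)])
        coef, residuals, *_ = np.linalg.lstsq(A, x, rcond=None)
        if residuals.size == 0:          # exact or rank-deficient fit
            pred = A @ coef
            return float(np.sum((x - pred) ** 2))
        return float(residuals[0])

    def segment_piecewise_linear(t, x, penalty=0.5):
        """Boundary indices minimizing SSE + penalty per segment (>= 2 samples each)."""
        n = len(t)
        best = np.full(n + 1, np.inf)    # best[i]: optimal cost of samples [0, i)
        best[0] = 0.0
        prev = np.zeros(n + 1, dtype=int)
        for i in range(2, n + 1):
            for j in range(0, i - 1):
                c = best[j] + linear_fit_cost(t[j:i], x[j:i]) + penalty
                if c < best[i]:
                    best[i], prev[i] = c, j
        bounds, i = [n], n               # backtrack the segment boundaries
        while i > 0:
            i = prev[i]
            bounds.append(i)
        return bounds[::-1]

    def classify_segments(t, x, bounds, saccade_speed=30.0):
        """Toy labeling by mean segment speed (deg/s); placeholder threshold."""
        labels = []
        for a, b in zip(bounds[:-1], bounds[1:]):
            speed = abs(x[b - 1] - x[a]) / max(t[b - 1] - t[a], 1e-9)
            labels.append("saccade" if speed > saccade_speed else "fixation")
        return labels

    # Tiny synthetic trace: fixation, fast shift, fixation (1-D position in degrees).
    rng = np.random.default_rng(0)
    t = np.arange(0, 0.3, 0.002)                       # 500 Hz, 0.3 s
    x = np.where(t < 0.15, 0.0, 8.0) + 0.1 * rng.standard_normal(len(t))
    ramp = (t >= 0.15) & (t < 0.16)
    x[ramp] = np.linspace(0.0, 8.0, ramp.sum())
    bounds = segment_piecewise_linear(t, x)
    print(bounds, classify_segments(t, x, bounds))

The real method replaces the exhaustive dynamic program with an approximately optimal single-pass segmentation and classifies the resulting segments with parameters learned from human-labeled data, which is what allows it to separate fixations, saccades, smooth pursuits, and post-saccadic oscillations rather than the two toy classes above.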