Chaudhary Aayush K, Pelz Jeff B
Carlson Center for Imaging Science, Rochester Institute of Technology, NY, USA.
J Eye Mov Res. 2019 Apr 5;12(6). doi: 10.16910/jemr.12.6.4.
The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods typically rely on precisely localizing the pupil and/or corneal reflection on successive frames, microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as pupil center and corneal reflection), which are sensitive to noise and drift.
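As an illustration of the velocity-based idea described above (not the authors' published implementation), the sketch below tracks iris texture features between consecutive frames with pyramidal Lucas-Kanade optical flow, converts the median feature displacement to an angular velocity, and flags frames whose velocity exceeds a threshold. The calibration constant, frame rate, threshold, and helper names are assumptions chosen for illustration.

```python
# Hypothetical sketch of velocity estimation from iris-feature motion.
# Assumes OpenCV and NumPy; all numeric constants are placeholders.
import cv2
import numpy as np

DEG_PER_PIXEL = 0.01    # assumed camera calibration: degrees of gaze per image pixel
FRAME_RATE_HZ = 500     # assumed high-speed camera frame rate
VELOCITY_THRESH = 10.0  # assumed microsaccade velocity threshold, deg/s


def frame_velocity(prev_gray, curr_gray, iris_mask):
    """Median angular velocity (deg/s) of iris features between two frames."""
    # Detect corner-like iris texture features, restricted to the iris region.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=iris_mask)
    if pts is None:
        return 0.0
    # Track those features into the next frame with pyramidal Lucas-Kanade flow.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    if not ok.any():
        return 0.0
    # Median per-feature displacement (pixels) -> angular velocity (deg/s).
    disp = np.median(np.linalg.norm(new_pts[ok] - pts[ok], axis=2))
    return float(disp) * DEG_PER_PIXEL * FRAME_RATE_HZ


def detect_candidate_microsaccades(velocities):
    """Return indices of frames whose velocity exceeds the assumed threshold."""
    return [i for i, v in enumerate(velocities) if v > VELOCITY_THRESH]
```

The key design point is that the detector operates on frame-to-frame velocity estimated from many distributed iris features, rather than on the position of a single fitted landmark such as the pupil center, which makes the signal less sensitive to localization noise and slow drift.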