Singh Tarkeshwar, Perry Christopher M, Herter Troy M
Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC-29208, USA.
J Neuroeng Rehabil. 2016 Jan 26;13:10. doi: 10.1186/s12984-015-0107-4.
Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye-tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g., the frontal plane). When visual stimuli are presented at variable depths (e.g., in the transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye-tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye-tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization.
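To make the velocity-based classification step concrete, the sketch below illustrates how angular gaze velocity can be thresholded to label each sample as a fixation, smooth pursuit, or saccade. This is a minimal illustration only: the geometrical correction for vergence described in the paper is not shown, and the function name, input format, and threshold values (in deg/s) are illustrative assumptions rather than the values or implementation reported by the authors.

```python
import numpy as np

def classify_gaze_events(gaze_deg, t, pursuit_thresh=20.0, saccade_thresh=130.0):
    """Label each gaze sample using velocity thresholds (illustrative sketch).

    gaze_deg : (N, 2) array of gaze angles in the transverse plane (degrees),
               assumed to already incorporate any geometric/vergence correction.
    t        : (N,) array of sample times (seconds).
    pursuit_thresh, saccade_thresh : placeholder angular-velocity cutoffs (deg/s),
               not the thresholds derived in the paper.
    """
    # Angular speed of the gaze point (deg/s), allowing non-uniform sampling.
    velocity = np.linalg.norm(np.gradient(gaze_deg, t, axis=0), axis=1)

    # Default to fixation, then overwrite faster samples.
    labels = np.full(len(velocity), "fixation", dtype=object)
    labels[velocity >= pursuit_thresh] = "smooth_pursuit"
    labels[velocity >= saccade_thresh] = "saccade"
    return labels

# Example usage with synthetic data sampled at 500 Hz.
t = np.arange(0, 1.0, 1 / 500)
gaze = np.column_stack([np.linspace(0, 5, t.size), np.zeros(t.size)])
print(classify_gaze_events(gaze, t)[:5])
```

Onsets and offsets of each event type could then be read off as the transitions between consecutive labels, which mirrors the role the velocity thresholds play in the authors' algorithm.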
Within the transverse plane, our algorithm reliably differentiates saccades from fixations when visual stimuli are static, and smooth pursuits from saccades and fixations when visual stimuli are dynamic.
The proposed methods advance the examination of eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.