Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA.
Department of Mechanical Engineering, Technical University of Munich, Munich, Germany.
Behav Res Methods. 2021 Apr;53(2):487-506. doi: 10.3758/s13428-020-01427-y.
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, drivers make large gaze scans to the left and right, comprising head and multiple eye movements. We detail an algorithm, the gaze scan algorithm, that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades and then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming process of marking gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans, and it can be used to better understand how individuals scan their environment.
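To illustrate the two-stage structure described in the abstract (detect lateral saccades, then merge them into gaze scans marked in time and eccentricity), the following is a minimal Python sketch. It is not the published algorithm: the velocity threshold, the merge gap, and the function name `detect_gaze_scans` are assumptions introduced here for illustration only.

```python
import numpy as np

def detect_gaze_scans(t, gaze_x, vel_thresh=30.0, merge_gap=0.4):
    """Hypothetical two-stage gaze-scan detector (illustrative sketch only).

    t          : sample timestamps in seconds
    gaze_x     : horizontal gaze eccentricity in degrees (head + eye)
    vel_thresh : velocity threshold (deg/s) for flagging lateral saccades (assumed)
    merge_gap  : maximum gap (s) between same-direction saccades to merge (assumed)
    """
    # Stage 1: flag samples whose horizontal gaze velocity exceeds the threshold.
    vel = np.gradient(gaze_x, t)
    moving = np.abs(vel) > vel_thresh

    # Collect contiguous supra-threshold runs as candidate lateral saccades.
    saccades, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            saccades.append({
                "t0": t[start], "t1": t[i - 1],
                "x0": gaze_x[start], "x1": gaze_x[i - 1],
                "dir": np.sign(gaze_x[i - 1] - gaze_x[start]),
            })
            start = None

    # Stage 2: merge same-direction saccades separated by short pauses into
    # single gaze scans, keeping each scan's start/end time and eccentricity.
    scans = []
    for s in saccades:
        if (scans and s["dir"] == scans[-1]["dir"]
                and s["t0"] - scans[-1]["t1"] <= merge_gap):
            scans[-1]["t1"] = s["t1"]
            scans[-1]["x1"] = s["x1"]
        else:
            scans.append(dict(s))

    # Report magnitude (deg) and duration (s) for each gaze scan.
    for sc in scans:
        sc["magnitude"] = abs(sc["x1"] - sc["x0"])
        sc["duration"] = sc["t1"] - sc["t0"]
    return scans
```

In this sketch, merging same-direction saccades across brief pauses is what allows a single head-plus-multiple-eye-movement scan toward an intersection arm to be reported as one gaze scan rather than several small saccades.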