Clare Teng, Harshita Sharma, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
Institute of Biomedical Engineering, University of Oxford, Oxford, UK.
Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
Simplifying Medical Ultrasound (2021). 2021 Sep 27;12967:129-138. doi: 10.1007/978-3-030-87583-1_13. Epub 2021 Sep 21.
We present a method for classifying tasks in fetal ultrasound scans using sonographers' eye-tracking data. The visual attention of a sonographer, captured by eye-tracking data over time, is defined by a scanpath. In routine fetal ultrasound, the captured standard imaging planes are visually inconsistent due to fetal position, fetal movements, and sonographer scanning experience. To address this challenge, we propose a scale- and position-invariant task classification method using normalised visual scanpaths. We describe a normalisation method that uses bounding boxes to provide the gaze with a reference to the position and scale of the imaging plane, and we use the normalised scanpath sequences to train machine learning models to discriminate between ultrasound tasks. We compare the proposed method to existing work that uses raw eye-tracking data. The best-performing model achieves an F1-score of 84% and outperforms existing models.
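The paper does not give implementation details of the normalisation, but the idea described in the abstract (expressing gaze coordinates relative to the position and scale of a bounding box around the imaging plane) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, tuple layout, and bounding-box convention `(x0, y0, width, height)` are assumptions for this example.

```python
def normalise_scanpath(points, bbox):
    """Map raw gaze points into bounding-box-relative coordinates.

    points: list of (x, y) gaze positions in screen pixels.
    bbox:   (x0, y0, width, height) of the imaging-plane bounding box.

    Returns points shifted by the box origin and divided by its size,
    so a point inside the box lies in [0, 1] x [0, 1] regardless of
    where the imaging plane appears on screen or how large it is.
    """
    x0, y0, w, h = bbox
    return [((x - x0) / w, (y - y0) / h) for (x, y) in points]


# Example: a gaze point at (150, 200) inside a box at (100, 100)
# of size 200 x 400 maps to (0.25, 0.25).
print(normalise_scanpath([(150, 200)], (100, 100, 200, 400)))
```

Sequences of such normalised points (rather than raw pixel coordinates) would then be fed to the task classifiers, making the representation invariant to the position and scale of the imaging plane.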