King Andrew J, Cooper Gregory F, Clermont Gilles, Hochheiser Harry, Hauskrecht Milos, Sittig Dean F, Visweswaran Shyam
Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, United States.
Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, United States.
J Med Internet Res. 2020 Apr 2;22(4):e15876. doi: 10.2196/15876.
Electronic medical record (EMR) systems capture large amounts of data per patient and present those data to physicians with little prioritization. Without prioritization, physicians must mentally identify and collate relevant data, an activity that can lead to cognitive overload. To mitigate cognitive overload, a Learning EMR (LEMR) system prioritizes the display of relevant medical record data. Relevant data are those that are pertinent to a context, defined as the combination of the user, clinical task, and patient case. To determine which data are relevant in a specific context, a LEMR system uses supervised machine learning models of physician information-seeking behavior. Because obtaining information-seeking behavior data via manual annotation is slow and expensive, automatic methods for capturing such data are needed.
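To make the prioritization step concrete, the following is a minimal sketch of how a LEMR-style system might rank patient data items by predicted relevance with a supervised classifier. The feature layout, the random-forest model, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: rank data items for display by predicted relevance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row describes one data item (e.g., a lab test) in one patient case:
# [most_recent_value_zscore, hours_since_last_measurement, abnormal_flag]
X_train = rng.normal(size=(500, 3))
# Target: 1 if the physician sought this item for the clinical task, else 0.
y_train = rng.integers(0, 2, size=500)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# At display time, items with the highest predicted relevance are shown first.
X_new = rng.normal(size=(10, 3))
relevance_scores = model.predict_proba(X_new)[:, 1]
ranked_items = np.argsort(relevance_scores)[::-1]
print(ranked_items)
```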
The goal of the research was to propose and evaluate eye tracking as a high-throughput method to automatically acquire physician information-seeking behavior useful for training models for a LEMR system.
Critical care medicine physicians reviewed intensive care unit patient cases in an EMR interface developed for the study. Participants manually identified patient data that were relevant in the context of a clinical task: preparing a patient summary to present at morning rounds. We used eye tracking to capture each physician's gaze dwell time on each data item (eg, blood glucose measurements). Manual annotations and gaze dwell times were used to define target variables for developing supervised machine learning models of physician information-seeking behavior. We compared the performance of manual-selection and gaze-derived models on an independent set of patient cases.
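As a rough illustration of the two labeling schemes described above, the sketch below derives a manual-annotation target and a dwell-time-based target for a single data item type and fits a model to each. The 250 ms dwell-time threshold, the logistic-regression model, the feature layout, and the simulated data are assumptions made for illustration only.

```python
# Sketch: build manual-selection and gaze-derived targets for one data item type.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_cases = 200

# Case-level features for one item type (e.g., blood glucose):
# [last_value, trend_over_24h, count_of_measurements]
X = rng.normal(size=(n_cases, 3))

# Manual annotation: physician explicitly marked the item as relevant (1) or not (0).
y_manual = rng.integers(0, 2, size=n_cases)

# Gaze-derived label: total dwell time (ms) on the item's screen region,
# thresholded into a binary "information was sought" target.
dwell_ms = rng.exponential(scale=300, size=n_cases)
y_gaze = (dwell_ms >= 250).astype(int)

manual_model = LogisticRegression(max_iter=1000).fit(X, y_manual)
gaze_model = LogisticRegression(max_iter=1000).fit(X, y_gaze)
```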
A total of 68 pairs of manual-selection and gaze-derived machine learning models were developed from training data and evaluated on an independent evaluation data set. A paired Wilcoxon signed-rank test showed similar performance of manual-selection and gaze-derived models on area under the receiver operating characteristic curve (AUROC; P=.40).
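The shape of this paired comparison can be reproduced in outline as follows; the AUROC values below are simulated placeholders rather than the study's results.

```python
# Paired Wilcoxon signed-rank test over per-model AUROC pairs (simulated values).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_model_pairs = 68

# One AUROC per model; each gaze-derived model is paired with its manual-selection counterpart.
auc_manual = rng.uniform(0.6, 0.9, size=n_model_pairs)
auc_gaze = auc_manual + rng.normal(0, 0.02, size=n_model_pairs)

statistic, p_value = wilcoxon(auc_manual, auc_gaze)
print(f"Wilcoxon signed-rank: statistic={statistic:.1f}, P={p_value:.2f}")
```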
We used eye tracking to automatically capture physician information-seeking behavior and used it to train models for a LEMR system. Models trained using eye tracking performed comparably to models trained using manual annotations. These results support further development of eye tracking as a high-throughput method for training clinical decision support systems that prioritize the display of relevant medical record data.