Rahmani Keyvan, Thapa Rahul, Tsou Peiling, Chetty Satish Casie, Barnes Gina, Lam Carson, Tso Chak Foon
Dascena, Inc., 12333 Sowden Rd Ste B PMB 65148, Houston, Texas 77080-2059.
medRxiv. 2022 Jun 7:2022.06.06.22276062. doi: 10.1101/2022.06.06.22276062.
Data drift can negatively impact the performance of machine learning algorithms (MLAs) that were trained on historical data. As such, MLAs should be continuously monitored and tuned to overcome the systematic changes that occur in the distribution of data. In this paper, we study the extent of data drift and provide insights about its characteristics for sepsis onset prediction. This study will help elucidate the nature of data drift for prediction of sepsis and similar diseases. This may aid with the development of more effective patient monitoring systems that can stratify risk for dynamic disease states in hospitals.
We devise a series of simulations that measure the effects of data drift in patients with sepsis. We simulate multiple scenarios in which data drift may occur, namely a change in the distribution of the predictor variables (covariate shift), a change in the statistical relationship between the predictors and the target (concept shift), and the occurrence of a major healthcare event (major event) such as the COVID-19 pandemic. We measure the impact of data drift on model performance, identify the circumstances that necessitate model retraining, and compare the effects of different retraining methodologies and model architectures on the outcomes. We present results for two different MLAs: eXtreme Gradient Boosting (XGB) and a Recurrent Neural Network (RNN).
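The abstract does not specify how drift is detected in these simulations; one common monitoring technique for the covariate-shift scenario is the Population Stability Index (PSI), which compares the feature distribution seen at training time against incoming data. The sketch below is illustrative only, not the paper's method; the synthetic feature, sample sizes, and drift parameters are all assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a new one.
    Values near 0 indicate a stable distribution; values above roughly 0.2
    are commonly treated as meaningful covariate shift (rule of thumb)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip incoming values into the reference range so none fall outside the bins.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) in empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 10_000)  # feature distribution at training time
stable    = rng.normal(0.0, 1.0, 10_000)  # new data drawn from the same distribution
shifted   = rng.normal(0.5, 1.2, 10_000)  # simulated covariate shift in the new data

psi(reference, stable)   # small: no retraining signal
psi(reference, shifted)  # large: distribution has drifted
```

In a deployment, a PSI check like this could run per feature at each simulation step, with retraining triggered once the index crosses a chosen threshold.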
Our results show that the properly retrained XGB models outperform the baseline models in all simulation scenarios, indicating the presence of data drift. In the major event scenario, the area under the receiver operating characteristic curve (AUROC) at the end of the simulation period is 0.811 for the baseline XGB model and 0.868 for the retrained XGB model. In the covariate shift scenario, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.853 and 0.874, respectively. In the concept shift scenario and under the mixed labeling method, the retrained XGB models perform worse than the baseline model for most simulation steps. However, under the full relabeling method, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.852 and 0.877, respectively. The results for the RNN models were mixed, suggesting that retraining based on a fixed network architecture may be inadequate for an RNN. We also report other performance metrics, such as the ratio of observed to expected probabilities (calibration) and the positive predictive value (PPV) normalized by prevalence, referred to as lift, at a sensitivity of 0.8.
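The two auxiliary metrics above can be sketched directly from their definitions: lift is the PPV at the threshold achieving the target sensitivity, divided by prevalence, and the calibration ratio is the observed event rate over the mean predicted probability. A minimal sketch follows; the function names and the perfectly separating toy scores are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lift_at_sensitivity(y_true, scores, target_sens=0.8):
    """PPV divided by prevalence, evaluated at the lowest score threshold
    whose sensitivity (recall) is at least target_sens."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos_scores = np.sort(scores[y_true == 1])[::-1]      # positives, high to low
    k = int(np.ceil(target_sens * len(pos_scores)))      # positives to capture
    threshold = pos_scores[k - 1]
    flagged = scores >= threshold                        # predicted positives
    ppv = y_true[flagged].mean()
    return ppv / y_true.mean()                           # lift = PPV / prevalence

def observed_to_expected(y_true, probs):
    """Calibration ratio: observed event rate over mean predicted probability.
    Values near 1 indicate a well-calibrated model."""
    return float(np.mean(y_true)) / float(np.mean(probs))

# Toy example: 10% prevalence and a perfectly separating model.
y_true = np.array([0] * 90 + [1] * 10)
scores = np.where(y_true == 1, 0.9, 0.1)
lift = lift_at_sensitivity(y_true, scores)  # PPV 1.0 at 10% prevalence -> lift 10
oe = observed_to_expected(y_true, np.full(100, 0.2))
```

A lift of 1 corresponds to flagging patients at random, so values well above 1 at a fixed sensitivity indicate that alerts are concentrated on true sepsis cases.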
Our simulations reveal that retraining intervals of a couple of months, or retraining sets of several thousand patients, are likely adequate for monitoring machine learning models that predict sepsis. This suggests that a machine learning system for sepsis prediction will probably need less infrastructure for performance monitoring and retraining than applications in which data drift is more frequent and continuous. Our results also show that in the event of a concept shift, a full overhaul of the sepsis prediction model may be necessary, because a concept shift indicates a discrete change in the definition of the sepsis labels, and mixing old and new labels for the sake of incremental training may not produce the desired results.