School of Population & Global Health, University of Western Australia, Perth; Department of Computer Science & Software Engineering, University of Western Australia, Perth.
Department of Computer Science & Software Engineering, University of Western Australia, Perth.
Comput Methods Programs Biomed. 2021 Nov;212:106415. doi: 10.1016/j.cmpb.2021.106415. Epub 2021 Sep 26.
Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions using Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome.
Using XAI, we quantified the contribution that specific drugs had on these ACS predictions, thus creating an XAI-based technique for pharmacovigilance monitoring, using ACS as an example of the adverse outcome to detect. Individuals aged over 65 who were supplied musculoskeletal-system (Anatomical Therapeutic Chemical (ATC) class M) or cardiovascular-system (ATC class C) drugs between 1993 and 2009 were identified, and their drug histories, comorbidities, and other key features were extracted from linked Western Australian datasets. Multiple ML models were trained to predict whether these individuals would have an ACS-related adverse outcome (i.e., death or hospitalisation with a discharge diagnosis of ACS), and a variety of ML and XAI techniques were used to calculate which features, specifically which drugs, led to these predictions.
The drug-dispensing features for rofecoxib and celecoxib were found to make a positive (greater-than-zero) average contribution to ACS-related adverse outcome predictions, and ACS-related adverse outcomes could be predicted with 72% accuracy. Furthermore, the XAI libraries LIME and SHAP successfully identified both important and unimportant features, with SHAP slightly outperforming LIME.
ML models trained on linked administrative health datasets in tandem with XAI algorithms can successfully quantify feature importance, and with further development, could potentially be used as pharmacovigilance monitoring techniques.
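The workflow described above (train a classifier on patient features, then attribute each prediction back to individual drug features) can be sketched in miniature. The snippet below is an illustrative stand-in only: it uses synthetic data with hypothetical feature names and scikit-learn's permutation importance as a simple model-agnostic attribution method, rather than the LIME/SHAP libraries and linked administrative datasets used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical binary input features: drug-dispensing flags and a comorbidity.
drug_a = rng.integers(0, 2, n)       # stand-in for a drug of interest
drug_b = rng.integers(0, 2, n)       # a drug with no simulated effect
comorbidity = rng.integers(0, 2, n)
noise = rng.integers(0, 2, n)        # deliberately irrelevant feature

# Simulate an adverse outcome whose risk is raised by drug_a and the comorbidity.
logit = -2.0 + 1.5 * drug_a + 1.0 * comorbidity
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([drug_a, drug_b, comorbidity, noise])
names = ["drug_a", "drug_b", "comorbidity", "noise"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Attribution step: mean accuracy drop on held-out data when a feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```

In this sketch the simulated risk-raising drug receives a clearly higher importance score than the irrelevant feature, mirroring how the study's XAI pipeline separated important from unimportant drug features.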