Shi Li, Rahman Redoan, Melamed Esther, Gwizdka Jacek, Rousseau Justin F, Ding Ying
School of Information, University of Texas at Austin, Austin, Texas, USA.
Dell Medical School, University of Texas at Austin, Austin, Texas, USA.
AMIA Jt Summits Transl Sci Proc. 2023 Jun 16;2023:477-486. eCollection 2023.
This paper applies eXplainable Artificial Intelligence (XAI) methods to investigate socioeconomic disparities in COVID-19 patient mortality. An Extreme Gradient Boosting (XGBoost) prediction model is built on a de-identified Austin-area hospital dataset to predict the mortality of COVID-19 patients. We apply two XAI methods, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to compare global and local interpretations of feature importance. This paper demonstrates the advantages of XAI, which reveals both feature importance and the model's decisive capability. Furthermore, we use the two XAI methods to cross-validate their interpretations for individual patients. The XAI models reveal that Medicare financial class, older age, and gender have a high impact on the mortality prediction. We find that LIME's local interpretation does not differ significantly from SHAP's in feature importance, which suggests pattern confirmation. This paper demonstrates the importance of XAI methods in cross-validating feature attributions.
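Below is a minimal, hypothetical Python sketch of the pipeline the abstract describes: fit an XGBoost classifier, compute SHAP values for a global view of feature importance, and then ask LIME for a local explanation of a single patient so the two attributions can be compared. The synthetic features (age, gender, financial_class), the synthetic mortality labels, and all hyperparameters are illustrative stand-ins; the paper's actual dataset and preprocessing are not given in the abstract.

```python
# Hypothetical sketch: XGBoost mortality model with SHAP (global) and
# LIME (local) attributions, cross-checked on one patient. All data here
# is synthetic; it only stands in for the de-identified hospital dataset.
import numpy as np
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(20, 95, n),   # age in years (illustrative)
    rng.integers(0, 2, n),     # gender coded 0/1 (illustrative)
    rng.integers(0, 4, n),     # financial-class code, e.g. Medicare (illustrative)
])
# Synthetic mortality label loosely driven by age, just to make the demo runnable.
y = (0.04 * X[:, 0] + rng.normal(0, 1, n) > 3.5).astype(int)
feature_names = ["age", "gender", "financial_class"]

# XGBoost mortality classifier (hyperparameters are placeholders).
model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Global interpretation: mean absolute SHAP value per feature over the cohort.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# Local interpretation: LIME weights for a single patient (row 0),
# to be compared against that patient's SHAP attribution.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["survived", "died"],
    discretize_continuous=True,
)
exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print("LIME weights for patient 0:", exp.as_list())
```

Under the paper's reported findings, one would expect financial class, age, and gender to dominate the mean |SHAP| ranking, and the per-patient LIME weights to order the same features similarly; that agreement between the two methods is what the abstract calls cross-validation of the feature attributions.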