The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review.

Author Information

Caterson Jessica, Lewin Alexandra, Williamson Elizabeth

Affiliations

Imperial College London, London, UK.

London School of Hygiene and Tropical Medicine, Bloomsbury, UK.

Publication Information

Digit Health. 2024 Oct 30;10:20552076241272657. doi: 10.1177/20552076241272657. eCollection 2024 Jan-Dec.

Abstract

Machine Learning (ML) and Deep Learning (DL) models show potential to surpass traditional methods, including generalised linear models, for healthcare prediction, particularly with large, complex datasets. However, low interpretability hinders practical implementation. To address this, Explainable Artificial Intelligence (XAI) methods have been proposed, but comprehensive evaluation of their effectiveness is currently limited. The aim of this scoping review is to critically appraise the application of XAI methods in ML/DL models using Electronic Health Record (EHR) data. In accordance with PRISMA scoping review guidelines, the study searched PubMed and OVID/MEDLINE (including EMBASE) for publications related to tabular EHR data that employed ML/DL models with XAI. Of 3220 identified publications, 76 were included. The selected publications, published between February 2017 and June 2023, demonstrated an exponential increase over time. Extreme Gradient Boosting and Random Forest models were the most frequently used ML/DL methods, appearing in 51 and 50 publications, respectively. Among XAI methods, SHapley Additive exPlanations (SHAP) predominated, used in 63 of 76 publications, followed by partial dependence plots (PDPs) in 11 publications and Local Interpretable Model-agnostic Explanations (LIME) in 8 publications. Despite the growing adoption of XAI methods, their applications varied widely and lacked critical evaluation. This review identifies the increasing use of XAI in tabular EHR research and highlights a deficiency in the reporting of methods and a lack of critical appraisal of validity and robustness. The study emphasises the need for further evaluation of XAI methods and underscores the importance of cautious implementation and interpretation in healthcare settings.
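For readers unfamiliar with how these methods combine in practice, the sketch below illustrates the pairing the review found most common: SHAP applied to an Extreme Gradient Boosting classifier on tabular data. This is a minimal illustration, not code from any reviewed study; the "EHR-like" feature names (age, systolic_bp, creatinine) and the synthetic outcome are fabricated assumptions for demonstration only.

```python
# Minimal sketch: SHAP explanations for an XGBoost classifier on tabular data.
# All feature names and the synthetic outcome below are illustrative
# assumptions, not data or code from the reviewed publications.
import numpy as np
import pandas as pd
import shap        # pip install shap
import xgboost     # pip install xgboost

rng = np.random.default_rng(0)
n = 500

# Hypothetical "EHR-like" tabular features.
X = pd.DataFrame({
    "age": rng.integers(18, 90, size=n),
    "systolic_bp": rng.normal(120, 15, size=n),
    "creatinine": rng.normal(1.0, 0.3, size=n),
})

# Synthetic binary outcome driven mostly by age and blood pressure.
y = ((X["age"] / 90 + X["systolic_bp"] / 200
      + rng.normal(0, 0.2, size=n)) > 1.2).astype(int)

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles (TreeSHAP).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row per patient, one column per feature

# Global view: beeswarm summary of per-feature SHAP values.
shap.summary_plot(shap_values, X)
```

That TreeSHAP computes exact Shapley values efficiently for tree ensembles may partly explain why SHAP predominates alongside Extreme Gradient Boosting and Random Forest in the reviewed studies, whereas model-agnostic methods such as LIME rely on local surrogate approximations.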

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/057c/11528818/e007e08c7e75/10.1177_20552076241272657-fig1.jpg
