The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review.

Author Information

Caterson Jessica, Lewin Alexandra, Williamson Elizabeth

Affiliations

Imperial College London, London, UK.

London School of Hygiene and Tropical Medicine, Bloomsbury, UK.

Publication Information

Digit Health. 2024 Oct 30;10:20552076241272657. doi: 10.1177/20552076241272657. eCollection 2024 Jan-Dec.

DOI: 10.1177/20552076241272657
PMID: 39493635
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11528818/
Abstract

Machine Learning (ML) and Deep Learning (DL) models show potential in surpassing traditional methods, including generalised linear models, for healthcare predictions, particularly with large, complex datasets. However, low interpretability hinders practical implementation. To address this, Explainable Artificial Intelligence (XAI) methods are proposed, but a comprehensive evaluation of their effectiveness is currently limited. The aim of this scoping review is to critically appraise the application of XAI methods in ML/DL models using Electronic Health Record (EHR) data. In accordance with PRISMA scoping review guidelines, the study searched PubMed and OVID/MEDLINE (including EMBASE) for publications related to tabular EHR data that employed ML/DL models with XAI. Out of 3220 identified publications, 76 were included. The selected publications, published between February 2017 and June 2023, demonstrated an exponential increase over time. Extreme Gradient Boosting and Random Forest models were the most frequently used ML/DL methods, with 51 and 50 publications, respectively. Among XAI methods, Shapley Additive Explanations (SHAP) was predominant in 63 out of 76 publications, followed by partial dependence plots (PDPs) in 11 publications, and Local Interpretable Model-agnostic Explanations (LIME) in 8 publications. Despite the growing adoption of XAI methods, their applications varied widely and lacked critical evaluation. This review identifies the increasing use of XAI in tabular EHR research and highlights a deficiency in the reporting of methods and a lack of critical appraisal of validity and robustness. The study emphasises the need for further evaluation of XAI methods and underscores the importance of cautious implementation and interpretation in healthcare settings.

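The SHAP method that dominates the reviewed literature is grounded in Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution to the prediction over all orderings of features. The sketch below illustrates that principle from scratch with exact subset enumeration; the toy `risk` function, feature values, and baseline are hypothetical, and production SHAP implementations use model-specific approximations (e.g. TreeSHAP) rather than this exponential-cost enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f(x) relative to a baseline input.
    Features outside a coalition S are set to their baseline values."""
    n = len(x)
    phi = [0.0] * n
    idx = list(range(n))
    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in idx]
                without_i = [x[j] if j in S else baseline[j] for j in idx]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical tabular "risk model": additive age effect plus an
# interaction between two lab values (the interaction is what makes
# Shapley attribution non-trivial).
def risk(v):
    age, lab1, lab2 = v
    return 0.02 * age + 0.5 * lab1 * lab2

x = [70, 2.0, 3.0]        # patient of interest
baseline = [50, 0.0, 0.0]  # reference patient
phi = shapley_values(risk, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline),
# and the lab1*lab2 interaction is split equally between lab1 and lab2.
print(phi, sum(phi), risk(x) - risk(baseline))
```

The age term is purely additive, so its attribution equals its isolated effect (0.02 × 20 = 0.4), while the symmetric interaction is shared equally (1.5 each); this exhaustive version is exponential in the number of features, which is why practical libraries approximate it.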

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/057c/11528818/e007e08c7e75/10.1177_20552076241272657-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/057c/11528818/b7a3e7f7d581/10.1177_20552076241272657-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/057c/11528818/4d21d42c288b/10.1177_20552076241272657-fig3.jpg

Similar Articles

1. The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review.
Digit Health. 2024 Oct 30;10:20552076241272657. doi: 10.1177/20552076241272657. eCollection 2024 Jan-Dec.
2. Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.
3. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
4. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
5. How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare.
Artif Intell Med. 2023 Sep;143:102616. doi: 10.1016/j.artmed.2023.102616. Epub 2023 Jun 24.
6. Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review.
J Am Med Inform Assoc. 2020 Jul 1;27(7):1173-1185. doi: 10.1093/jamia/ocaa053.
7. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
8. Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach.
J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030.
9. Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.
Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.
10. Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.

Cited By

1. Height estimation in children and adolescents using body composition big data: Machine-learning and explainable artificial intelligence approach.
Digit Health. 2025 Mar 28;11:20552076251331879. doi: 10.1177/20552076251331879. eCollection 2025 Jan-Dec.

References

1. The underuse of AI in the health sector: Opportunity costs, success stories, risks and recommendations.
Health Technol (Berl). 2024;14(1):1-14. doi: 10.1007/s12553-023-00806-7. Epub 2023 Dec 12.
2. An explanatory analytics framework for early detection of chronic risk factors in pandemics.
Healthc Anal (N Y). 2022 Nov;2:100020. doi: 10.1016/j.health.2022.100020. Epub 2022 Jan 10.
3. Towards Interpretable Multimodal Predictive Models for Early Mortality Prediction of Hemorrhagic Stroke Patients.
AMIA Jt Summits Transl Sci Proc. 2023 Jun 16;2023:128-137. eCollection 2023.
4. Development of an algorithm for finding pertussis episodes in a population-based electronic health record database.
Hum Vaccin Immunother. 2023 Dec 31;19(1):2209455. doi: 10.1080/21645515.2023.2209455.
5. Clinically explainable machine learning models for early identification of patients at risk of hospital-acquired urinary tract infection.
J Hosp Infect. 2024 Dec;154:112-121. doi: 10.1016/j.jhin.2023.03.017. Epub 2023 Mar 31.
6. Machine Learning Models Using Routinely Collected Clinical Data Offer Robust and Interpretable Predictions of 90-Day Unplanned Acute Care Use for Cancer Immunotherapy Patients.
JCO Clin Cancer Inform. 2023 Mar;7:e2200123. doi: 10.1200/CCI.22.00123.
7. Toward explainable AI-empowered cognitive health assessment.
Front Public Health. 2023 Mar 9;11:1024195. doi: 10.3389/fpubh.2023.1024195. eCollection 2023.
8. An interpretable machine learning approach for predicting 30-day readmission after stroke.
Int J Med Inform. 2023 Jun;174:105050. doi: 10.1016/j.ijmedinf.2023.105050. Epub 2023 Mar 21.
9. Benchmarking of Machine Learning classifiers on plasma proteomic for COVID-19 severity prediction through interpretable artificial intelligence.
Artif Intell Med. 2023 Mar;137:102490. doi: 10.1016/j.artmed.2023.102490. Epub 2023 Jan 18.
10. Machine Learning for Predicting Micro- and Macrovascular Complications in Individuals With Prediabetes or Diabetes: Retrospective Cohort Study.
J Med Internet Res. 2023 Feb 27;25:e42181. doi: 10.2196/42181.