

Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review.

Authors

Jung Jinsun, Lee Hyungbok, Jung Hyunggu, Kim Hyeoneui

Affiliations

College of Nursing, Seoul National University, Seoul, Republic of Korea.

Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea.

Publication

Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.


DOI: 10.1016/j.heliyon.2023.e16110
PMID: 37234618
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10205582/
Abstract

BACKGROUND: Significant advancements in the field of information technology have influenced the creation of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance of XAI, XAI techniques have not yet been integrated into real-time patient care.

OBJECTIVE: The aim of this systematic review is to understand the trends and gaps in research on XAI through an assessment of the essential properties of XAI and an evaluation of explanation effectiveness in the healthcare field.

METHODS: A search of PubMed and Embase databases for relevant peer-reviewed articles on development of an XAI model using clinical data and evaluating explanation effectiveness published between January 1, 2011, and April 30, 2022, was conducted. All retrieved papers were screened independently by the two authors. Relevant papers were also reviewed for identification of the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability).

RESULTS: Six out of 882 articles met the criteria for eligibility. Artificial Intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation, justification, improvement, and learning from AI. Evaluation of the quality of personalized explanations was based on fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied.

CONCLUSION: XAI research should address the lack of a comprehensive and agreed-upon framework for explaining XAI and standardized approaches for evaluating the effectiveness of the explanation that XAI provides to diverse AI stakeholders.
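The date-bounded PubMed search described in Methods can be reproduced against NCBI's public E-utilities API. The sketch below only constructs an `esearch` query URL with the review's stated publication-date window; the search term is a hypothetical stand-in, since the abstract does not give the review's actual search strategy.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public, no key required for light use).
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search(term: str, mindate: str, maxdate: str, retmax: int = 100) -> str:
    """Build an E-utilities esearch URL bounded by publication date (pdat)."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,   # format: YYYY/MM/DD
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{BASE}?{urlencode(params)}"

# Hypothetical query term; the date window matches the Methods section.
url = build_pubmed_search(
    '"explainable artificial intelligence" AND healthcare',
    mindate="2011/01/01",
    maxdate="2022/04/30",
)
print(url)
```

Fetching this URL returns a JSON list of matching PMIDs, which a screening workflow would then deduplicate against the Embase results before title/abstract review.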


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/89fc/10205582/acdda389b66b/gr1.jpg

Similar articles

[1] Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023-5-8
[2] Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics (Basel). 2022-1-19
[3] Guidelines and evaluation of clinical explainable AI in medical image analysis. Med Image Anal. 2023-2
[4] Human attention guided explainable artificial intelligence for computer vision models. Neural Netw. 2024-9
[5] Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space. J Neural Eng. 2024-8-8
[6] Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review. IEEE Rev Biomed Eng. 2023
[7] Toward explainable AI (XAI) for mental health detection based on language behavior. Front Psychiatry. 2023-12-7
[8] Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review. Cancer Innov. 2024-7-3
[9] Framework for Classifying Explainable Artificial Intelligence (XAI) Algorithms in Clinical Medicine. Online J Public Health Inform. 2023-9-1
[10] Explainable machine learning for breast cancer diagnosis from mammography and ultrasound images: a systematic review. BMJ Health Care Inform. 2024-2-2

Cited by

[1] Prognostic Models in Heart Failure: Hope or Hype? J Pers Med. 2025-8-1
[2] Artificial Intelligence-Augmented Human Instruction and Surgical Simulation Performance: A Randomized Clinical Trial. JAMA Surg. 2025-8-6
[3] Explainable AI for Clinical Outcome Prediction: A Survey of Clinician Perceptions and Preferences. AMIA Jt Summits Transl Sci Proc. 2025-6-10
[4] The effectiveness, equity and explainability of health service resource allocation-with applications in kidney transplantation & family planning. Front Health Serv. 2025-5-15
[5] Clinical applications of artificial intelligence and machine learning in neurocardiology: a comprehensive review. Front Cardiovasc Med. 2025-4-3
[6] Applications of Artificial Intelligence in Nursing Care: A Systematic Review. J Nurs Manag. 2023-7-26
[7] Clinical validation of explainable AI for fetal growth scans through multi-level, cross-institutional prospective end-user evaluation. Sci Rep. 2025-1-15
[8] Healthcare workers' knowledge and attitudes regarding artificial intelligence adoption in healthcare: A cross-sectional study. Heliyon. 2024-11-29
[9] Human-centered evaluation of explainable AI applications: a systematic review. Front Artif Intell. 2024-10-17
[10] How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review. JMIR AI. 2024-10-30

References

[1] Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform. 2022-5
[2] Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf Fusion. 2022-1
[3] Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers. Sensors (Basel). 2021-8-23
[4] The use of explainable artificial intelligence to explore types of fenestral otosclerosis misdiagnosed when using temporal bone high-resolution computed tomography. Ann Transl Med. 2021-6
[5] Explaining a model predicting quality of surgical practice: a first presentation to and review by clinical experts. Int J Comput Assist Radiol Surg. 2021-11
[6] Interpretable and Lightweight 3-D Deep Learning Model for Automated ACL Diagnosis. IEEE J Biomed Health Inform. 2021-7
[7] Interpretable heartbeat classification using local model-agnostic explanations on ECGs. Comput Biol Med. 2021-6
[8] The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021-3-29
[9] An explainable algorithm for detecting drug-induced QT-prolongation at risk of torsades de pointes (TdP) regardless of heart rate and T-wave morphology. Comput Biol Med. 2021-4
[10] A Comprehensive Explanation Framework for Biomedical Time Series Classification. IEEE J Biomed Health Inform. 2021-7
