

The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.

Affiliations

Department of Medical Informatics, Erasmus University Medical Center, Rotterdam, the Netherlands.


Publication Information

J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.


DOI: 10.1016/j.jbi.2020.103655
PMID: 33309898
Abstract

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and contribute to formalization of the field of explainable AI. We argue the reason to demand explainability determines what should be explained as this determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice and complementary measures might be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
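The abstract distinguishes explainable modelling from post-hoc explanation, and model-based, attribution-based, and example-based explanations at global or local scope. One concrete instance of a post-hoc, global, attribution-based method is permutation importance: shuffle one feature column at a time and measure how much the opaque model's accuracy degrades. The sketch below is illustrative only; the helper function and toy model are hypothetical and not taken from the survey.

```python
# Sketch of a post-hoc, global, attribution-based explanation
# (permutation importance) for an opaque model. Illustrative only.
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Attribute importance to each feature as the average drop in
    accuracy when that feature's column is shuffled across rows."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black box" that only looks at feature 0.
black_box = lambda row: int(row[0] > 0.5)
X = [[i / 10, (7 * i) % 10 / 10] for i in range(10)]
y = [black_box(row) for row in X]

imp = permutation_importance(black_box, X, y)
# Shuffling feature 0 hurts accuracy; shuffling feature 1 changes nothing.
```

Because the procedure needs only predictions, it applies to any already-trained model; the trade-off the paper highlights is that such post-hoc attributions may have limited fidelity to how the model actually computes its output.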


Similar Articles

[1] The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform. 2021-1
[2] Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res. 2021-12-13
[3] The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics. 2024-10-1
[4] Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. 2023-12
[5] A mental models approach for defining explainable artificial intelligence. BMC Med Inform Decis Mak. 2021-12-9
[6] The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021-11
[7] Towards a Knowledge Graph-Based Explainable Decision Support System in Healthcare. Stud Health Technol Inform. 2021-5-27
[8] Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019
[9] Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Ethics Inf Technol. 2022
[10] A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med. 2022-11

Cited By

[1] Personalized health monitoring using explainable AI: bridging trust in predictive healthcare. Sci Rep. 2025-8-29
[2] Artificial intelligence and machine learning in spine care: Advancing precision diagnosis, treatment, and rehabilitation. World J Orthop. 2025-8-18
[3] Enabling Physicians to Make an Informed Adoption Decision on Artificial Intelligence Applications in Medical Imaging Diagnostics: Qualitative Study. J Med Internet Res. 2025-8-12
[4] Machine Learning Algorithm to Explore Patients With Heterogeneous Treatment Effects of Clinically Significant CMV Infection and Non-Relapse Mortality After HSCT. EJHaem. 2025-8-9
[5] Advancing Real-World Evidence Through a Federated Health Data Network (EHDEN): Descriptive Study. J Med Internet Res. 2025-8-7
[6] Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review. J Med Internet Res. 2025-8-7
[7] Artificial Intelligence-Augmented Human Instruction and Surgical Simulation Performance: A Randomized Clinical Trial. JAMA Surg. 2025-8-6
[8] Dynamic gating-enhanced deep learning model with multi-source remote sensing synergy for optimizing wheat yield estimation. Front Plant Sci. 2025-7-21
[9] Integration of label-free surface enhanced Raman spectroscopy (SERS) of extracellular vesicles (EVs) with Raman tagged labels to enhance ovarian cancer diagnostics. Biosens Bioelectron. 2025-11-15
[10] Development and Validation of a Large Language Model-Powered Chatbot for Neurosurgery: Mixed Methods Study on Enhancing Perioperative Patient Education. J Med Internet Res. 2025-7-15
