
The false hope of current approaches to explainable artificial intelligence in health care.

Author Affiliations

Department of Electrical Engineering and Computer Science and Institute for Medical and Evaluative Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Vector Institute, Toronto, ON, Canada.

Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia.

Publication Information

Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.

PMID: 34711379
Abstract

The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
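To make concrete the kind of post-hoc, patient-level explanation the Viewpoint critiques, here is a minimal perturbation-based (occlusion) attribution sketch. Everything in it is invented for illustration: the toy risk model, its weights, and the patient record have no clinical meaning and do not come from the paper. The point is that a local attribution of this kind can look plausible for an individual patient while saying nothing about whether the underlying model is correct.

```python
def risk_model(x):
    """Toy linear risk score standing in for a black-box model.

    The features and weights are arbitrary illustrative values.
    """
    weights = {"age": 0.03, "lactate": 0.40, "heart_rate": 0.01}
    return sum(weights[k] * v for k, v in x.items())

def occlusion_attribution(model, patient):
    """Post-hoc attribution: score change when each feature is zeroed.

    This is the simplest form of perturbation-based explanation;
    more elaborate methods (e.g. saliency maps, LIME-style surrogates)
    share the same local, model-agnostic character.
    """
    baseline = model(patient)
    attributions = {}
    for feature in patient:
        perturbed = dict(patient, **{feature: 0.0})
        attributions[feature] = baseline - model(perturbed)
    return attributions

# Hypothetical patient record, for illustration only.
patient = {"age": 70, "lactate": 3.5, "heart_rate": 95}
for feature, score in occlusion_attribution(risk_model, patient).items():
    print(f"{feature}: {score:.2f}")
```

The output ranks features by their contribution to this one prediction, which is exactly the "transparency" such methods promise; the abstract's argument is that this per-patient plausibility does not validate the model's decisions.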


Similar Articles

[1] The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021-11
[2] Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res. 2021-12-13
[3] The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform. 2021-1
[4] The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics. 2024-10-1
[5] Should AI models be explainable to clinicians? Crit Care. 2024-9-12
[6] Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. 2023-12
[7] Medical Informatics in a Tension Between Black-Box AI and Trust. Stud Health Technol Inform. 2022-1-14
[8] Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019
[9] A review of explainable and interpretable AI with applications in COVID-19 imaging. Med Phys. 2022-1
[10] A mental models approach for defining explainable artificial intelligence. BMC Med Inform Decis Mak. 2021-12-9

Articles Citing This Publication

[1] Explainable AI in medicine: challenges of integrating XAI into the future clinical routine. Front Radiol. 2025-8-5
[2] Genomic Characterization of Lung Cancer in Never-Smokers Using Deep Learning. bioRxiv. 2025-8-20
[3] Detecting papilloedema as a marker of raised intracranial pressure using artificial intelligence: A systematic review. PLOS Digit Health. 2025-9-2
[4] Beyond black boxes: using explainable causal artificial intelligence to separate signal from noise in pharmacovigilance. Int J Clin Pharm. 2025-9-1
[5] Patient-centered AI. Front Digit Health. 2025-8-13
[6] Beyond Post hoc Explanations: A Comprehensive Framework for Accountable AI in Medical Imaging Through Transparency, Interpretability, and Explainability. Bioengineering (Basel). 2025-8-15
[7] The algorithmic consultant: a new era of clinical AI calls for a new workforce of physician-algorithm specialists. NPJ Digit Med. 2025-8-27
[8] Artificial Intelligence in Primary Care: Support or Additional Burden on Physicians' Healthcare Work?-A Qualitative Study. Clin Pract. 2025-7-25
[9] Transforming sepsis management: AI-driven innovations in early detection and tailored therapies. Crit Care. 2025-8-19
[10] Advancements in Sensor Technology for Monitoring and Management of Chronic Coronary Syndrome. Sensors (Basel). 2025-7-24
