
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.

Affiliations

Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092, Zurich, Switzerland.

Charité Lab for Artificial Intelligence in Medicine-CLAIM, Charité - Universitätsmedizin Berlin, Berlin, Germany.

Publication Information

BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.


DOI: 10.1186/s12911-020-01332-6
PMID: 33256715
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7706019/
Abstract

BACKGROUND: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

METHODS: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

RESULTS: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

CONCLUSIONS: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
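The RESULTS paragraph notes that, technologically, explainability must be weighed in terms of how it can be achieved. As a minimal, hypothetical Python sketch of one widely used post-hoc technique in this space (not a method proposed by the paper), the snippet below computes permutation feature importance for a classifier trained on synthetic data; the clinical-sounding feature names are invented for illustration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical risk dataset (features are hypothetical).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "hba1c", "bmi"]  # illustrative only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy; larger drops mean the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

Permutation importance is model-agnostic, which is why it is often proposed for otherwise opaque models; note that it explains global model behavior, not individual decisions, a distinction the paper's medical and patient perspectives make salient.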


Similar Articles

[1] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020-11-30
[2] Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education. JMIR Med Educ. 2024-02-09
[3] Artificial Intelligence in Medical Practice: Regulative Issues and Perspectives. Wiad Lek. 2020
[4] Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review. J Healthc Eng. 2023
[5] Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res. 2021-12-13
[6] Defining AMIA's artificial intelligence principles. J Am Med Inform Assoc. 2022-03-15
[7] The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics. 2024-10-01
[8] Artificial intelligence in medicine: Ethical, social and legal perspectives. Ann Acad Med Singap. 2023-12-28
[9] Ethical and Legal Challenges of Artificial Intelligence in Nuclear Medicine. Semin Nucl Med. 2021-03
[10] Are current clinical studies on artificial intelligence-based medical devices comprehensive enough to support a full health technology assessment? A systematic review. Artif Intell Med. 2023-06

Cited By

[1] Exploring AI use policies in manuscript writing in cardiology and vascular journals. Am Heart J Plus. 2025-08-08
[2] Personalized health monitoring using explainable AI: bridging trust in predictive healthcare. Sci Rep. 2025-08-29
[3] AI in Fracture Detection: A Cross-Disciplinary Analysis of Physician Acceptance Using the UTAUT Model. Diagnostics (Basel). 2025-08-21
[4] Analyzing Retinal Vessel Morphology in MS Using Interpretable AI on Deep Learning-Segmented IR-SLO Images. Bioengineering (Basel). 2025-08-06
[5] Artificial Intelligence in Primary Care: Support or Additional Burden on Physicians' Healthcare Work? A Qualitative Study. Clin Pract. 2025-07-25
[6] Development and multi-cohort validation of a machine learning-based simplified frailty assessment tool for clinical risk prediction. J Transl Med. 2025-08-15
[7] Artificial intelligence across the cancer care continuum. Cancer. 2025-08-15
[8] Incorporating Uncertainty Estimation and Interpretability in Personalized Glucose Prediction Using the Temporal Fusion Transformer. Sensors (Basel). 2025-07-26
[9] Artificial Intelligence (AI) and Emergency Medicine: Balancing Opportunities and Challenges. JMIR Med Inform. 2025-08-13
[10] Trust in Medical AI: The Case of mHealth Diabetes Apps. J Eval Clin Pract. 2025-08

References

[1] Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat Mach Intell. 2019-05
[2] Normalization of a conversation tool to promote shared decision making about anticoagulation in patients with atrial fibrillation within a practical randomized trial of its effectiveness: a cross-sectional study. Trials. 2020-05-12
[3] Machine intelligence in healthcare: perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit Med. 2020-03-26
[4] Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med Exp. 2019-12-10
[5] A hybrid machine learning approach to cerebral stroke prediction based on imbalanced medical dataset. Artif Intell Med. 2019-10-23
[6] On the ethics of algorithmic decision-making in healthcare. J Med Ethics. 2020-03
[7] Subclinical and Device-Detected Atrial Fibrillation: Pondering the Knowledge Gap: A Scientific Statement From the American Heart Association. Circulation. 2019-11-07
[8] Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019-10-25
[9] Unmasking Clever Hans predictors and assessing what machines really learn. Nat Commun. 2019-03-11
[10] Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019-01
