Suppr 超能文献



Causality and scientific explanation of artificial intelligence systems in biomedicine.

Authors

Boge Florian, Mosig Axel

Affiliations

Institute for Philosophy and Political Science, Technical University Dortmund, Emil-Figge-Str. 50, 44227, Dortmund, Germany.

Bioinformatics Group, Department for Biology and Biotechnology, Ruhr-University Bochum (RUB), Gesundheitscampus 4, 44801, Bochum, NRW, Germany.

Publication

Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.

DOI: 10.1007/s00424-024-03033-9
PMID: 39470762
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11958387/
Abstract

With rapid advances of deep neural networks over the past decade, artificial intelligence (AI) systems are now commonplace in many applications in biomedicine. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite their commonly high predictive accuracy, the trustworthiness of AI systems needs to be questioned when it comes to decision-making that affects the well-being of patients or the fairness towards patients or other stakeholders affected by AI-based decisions. To address this, the field of explainable artificial intelligence, or XAI for short, has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations of AI establish the trustworthiness of AI-based decisions, it remains unclear what traits of explanations cause them to foster trustworthiness. Building on historical cases of scientific explanation in medicine, we here advance our perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of being scientific explanations. To further underpin our approach, we discuss its relation to the concepts of causality and randomized intervention. In our perspective, we combine aspects from the three disciplines of biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of artificial intelligence relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent cases of AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.


Figures (PMC11958387):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ba70/11958387/fca6876d8b66/424_2024_3033_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ba70/11958387/8beb1f144be5/424_2024_3033_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ba70/11958387/c16b170997a1/424_2024_3033_Fig3_HTML.jpg

Similar Articles

1. Causality and scientific explanation of artificial intelligence systems in biomedicine.
Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.
2. How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.
3. Beginnings of Artificial Intelligence in Medicine (AIM): Computational Artifice Assisting Scientific Inquiry and Clinical Art - with Reflections on Present AIM Challenges.
Yearb Med Inform. 2019 Aug;28(1):249-256. doi: 10.1055/s-0039-1677895. Epub 2019 Apr 25.
4. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
5. A historical perspective of biomedical explainable AI research.
Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.
6. Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach.
J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030.
7. Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review.
Comput Biol Med. 2025 Jan;184:109391. doi: 10.1016/j.compbiomed.2024.109391. Epub 2024 Nov 22.
8. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
9. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
10. Forms of explanation and understanding for neuroscience and artificial intelligence.
J Neurophysiol. 2021 Dec 1;126(6):1860-1874. doi: 10.1152/jn.00195.2021. Epub 2021 Oct 13.

Cited By

1. Application of causal forest double machine learning (DML) approach to assess tuberculosis preventive therapy's impact on ART adherence.
Sci Rep. 2025 Aug 9;15(1):29130. doi: 10.1038/s41598-025-14460-8.
2. Special issue European Journal of Physiology: Artificial intelligence in the field of physiology and medicine.
Pflugers Arch. 2025 Apr;477(4):509-512. doi: 10.1007/s00424-025-03071-x. Epub 2025 Mar 11.

References

1. A Framework for Interpretability in Machine Learning for Medical Imaging.
IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.
2. AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial.
Nat Med. 2024 May;30(5):1461-1470. doi: 10.1038/s41591-024-02961-4. Epub 2024 Apr 29.
3. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review.
Lancet Digit Health. 2024 May;6(5):e367-e373. doi: 10.1016/S2589-7500(24)00047-5.
4. Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study.
Lancet Oncol. 2023 Aug;24(8):936-944. doi: 10.1016/S1470-2045(23)00298-X.
5. Generalizable biomarker prediction from cancer pathology slides with self-supervised deep learning: A retrospective multi-centric study.
Cell Rep Med. 2023 Apr 18;4(4):100980. doi: 10.1016/j.xcrm.2023.100980. Epub 2023 Mar 22.
6. Deep learning based tumor-stroma ratio scoring in colon cancer correlates with microscopic assessment.
J Pathol Inform. 2023 Jan 20;14:100191. doi: 10.1016/j.jpi.2023.100191. eCollection 2023.
7. Multistain deep learning for prediction of prognosis and therapy response in colorectal cancer.
Nat Med. 2023 Feb;29(2):430-439. doi: 10.1038/s41591-022-02134-1. Epub 2023 Jan 9.
8. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review.
NPJ Digit Med. 2022 Oct 19;5(1):156. doi: 10.1038/s41746-022-00699-2.
9. A framework for falsifiable explanations of machine learning models with an application in computational pathology.
Med Image Anal. 2022 Nov;82:102594. doi: 10.1016/j.media.2022.102594. Epub 2022 Aug 24.
10. Interpretability-Guided Inductive Bias For Deep Learning Based Medical Image.
Med Image Anal. 2022 Oct;81:102551. doi: 10.1016/j.media.2022.102551. Epub 2022 Jul 22.