

Causability and explainability of artificial intelligence in medicine.

Author information

Holzinger Andreas, Langs Georg, Denk Helmut, Zatloukal Kurt, Müller Heimo

Affiliations

Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria.

Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Vienna, Austria.

Publication information

Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.

DOI: 10.1002/widm.1312
PMID: 32089788
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7017860/
Abstract

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use-case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/98cb/7017860/dae18a5ea91f/WIDM-9-e1312-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/98cb/7017860/77be02965098/WIDM-9-e1312-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/98cb/7017860/d291d0f6422a/WIDM-9-e1312-g002.jpg

Similar articles

1. Causability and explainability of artificial intelligence in medicine.
Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
2. Explainability and causability in digital pathology.
J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12.
3. Explainable AI and Multi-Modal Causability in Medicine.
I Com (Berl). 2021 Jan 26;19(3):171-179. doi: 10.1515/icom-2020-0024. Epub 2021 Jan 15.
4. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.
N Biotechnol. 2022 Sep 25;70:67-72. doi: 10.1016/j.nbt.2022.05.002. Epub 2022 May 6.
5. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.
Kunstliche Intell (Oldenbourg). 2020;34(2):193-198. doi: 10.1007/s13218-020-00636-z. Epub 2020 Jan 21.
6. Explainable artificial intelligence in emergency medicine: an overview.
Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
7. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
8. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
9. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
10. Explainability of deep learning models in medical video analysis: a survey.
PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023.

Cited by

1. Exploring knowledge gaps: A mixed-method cross-sectional study on Indian dental students' perspectives and ethical awareness on artificial intelligence in dentistry.
J Oral Biol Craniofac Res. 2025 Nov-Dec;15(6):1274-1278. doi: 10.1016/j.jobcr.2025.08.005. Epub 2025 Aug 15.
2. Developing multimodal cervical cancer risk assessment and prediction model based on LMIC hospital patient card sheets and histopathological images.
BMC Med Inform Decis Mak. 2025 Sep 1;25(1):322. doi: 10.1186/s12911-025-03174-6.
3. Classification and predictive models using supervised machine learning: A conceptual review.
South Afr J Crit Care. 2025 May 19;41(1):e2937. doi: 10.7196/SAJCC.2025.v411.2937. eCollection 2025.
4. Smart CAR-T Nanosymbionts: archetypes and proto-models.
Front Immunol. 2025 Aug 12;16:1635159. doi: 10.3389/fimmu.2025.1635159. eCollection 2025.
5. Advancements in Diagnosis of Neoplastic and Inflammatory Skin Diseases: Old and Emerging Approaches.
Diagnostics (Basel). 2025 Aug 20;15(16):2100. doi: 10.3390/diagnostics15162100.
6. An Explainable Approach to Parkinson's Diagnosis Using the Contrastive Explanation Method-CEM.
Diagnostics (Basel). 2025 Aug 18;15(16):2069. doi: 10.3390/diagnostics15162069.
7. Deep Learning Techniques for Prostate Cancer Analysis and Detection: Survey of the State of the Art.
J Imaging. 2025 Jul 28;11(8):254. doi: 10.3390/jimaging11080254.
8. Passive Sensing for Mental Health Monitoring Using Machine Learning With Wearables and Smartphones: Scoping Review.
J Med Internet Res. 2025 Aug 14;27:e77066. doi: 10.2196/77066.
9. Artificial intelligence in pharmacovigilance: a narrative review and practical experience with an expert-defined Bayesian network tool.
Int J Clin Pharm. 2025 Aug;47(4):932-944. doi: 10.1007/s11096-025-01975-3. Epub 2025 Jul 30.
10. Influence of Leadership on Human-Artificial Intelligence Collaboration.
Behav Sci (Basel). 2025 Jun 27;15(7):873. doi: 10.3390/bs15070873.

References

1. Unsupervised Identification of Disease Marker Candidates in Retinal OCT Imaging Data.
IEEE Trans Med Imaging. 2019 Apr;38(4):1037-1047. doi: 10.1109/TMI.2018.2877080. Epub 2018 Oct 22.
2. Machine Learning Methods for Histopathological Image Analysis.
Comput Struct Biotechnol J. 2018 Feb 9;16:34-42. doi: 10.1016/j.csbj.2018.01.001. eCollection 2018.
3. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes.
JAMA. 2017 Dec 12;318(22):2211-2223. doi: 10.1001/jama.2017.18152.
4. Classification of breast cancer histology images using Convolutional Neural Networks.
PLoS One. 2017 Jun 1;12(6):e0177544. doi: 10.1371/journal.pone.0177544. eCollection 2017.
5. Dermatologist-level classification of skin cancer with deep neural networks.
Nature. 2017 Feb 2;542(7639):115-118. doi: 10.1038/nature21056. Epub 2017 Jan 25.
6. Building machines that learn and think like people.
Behav Brain Sci. 2017 Jan;40:e253. doi: 10.1017/S0140525X16001837. Epub 2016 Nov 24.
7. Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Brain Inform. 2016 Jun;3(2):119-131. doi: 10.1007/s40708-016-0042-6. Epub 2016 Mar 2.
8. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines.
Science. 2015 Jul 17;349(6245):273-8. doi: 10.1126/science.aac6076. Epub 2015 Jul 16.
9. Machine learning: Trends, perspectives, and prospects.
Science. 2015 Jul 17;349(6245):255-60. doi: 10.1126/science.aaa8415.
10. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.
PLoS One. 2015 Jul 10;10(7):e0130140. doi: 10.1371/journal.pone.0130140. eCollection 2015.