


What is Interpretability?

Author information

Erasmus Adrian, Brunet Tyler D P, Fisher Eyal

Affiliations

Institute for the Future of Knowledge, University of Johannesburg, Johannesburg, South Africa.

Department of History and Philosophy of Science, University of Cambridge, Free School Ln., Cambridge, CB2 3RH UK.

Publication information

Philos Technol. 2021;34(4):833-862. doi: 10.1007/s13347-020-00435-2. Epub 2020 Nov 12.

DOI:10.1007/s13347-020-00435-2
PMID:34966640
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8654716/
Abstract

We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of "explanation" tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on "explainability," "understandability" and "interpretability." To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of "interpretability" is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.


Figures
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/95861ca8a3c8/13347_2020_435_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/f038ebf5fe28/13347_2020_435_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/2ba5508baab8/13347_2020_435_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/58a2f33794ef/13347_2020_435_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/6904b011ec23/13347_2020_435_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4e0/8654716/475326964e18/13347_2020_435_Fig6_HTML.jpg

Similar articles

1. What is Interpretability? Philos Technol. 2021;34(4):833-862. doi: 10.1007/s13347-020-00435-2. Epub 2020 Nov 12.
2. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. NPJ Digit Med. 2023 Jan 18;6(1):6. doi: 10.1038/s41746-023-00751-9.
3. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
4. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data. Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
5. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med. 2022 Nov;133:102423. doi: 10.1016/j.artmed.2022.102423. Epub 2022 Oct 9.
6. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.
7. AI explainability framework for environmental management research. J Environ Manage. 2023 Sep 15;342:118149. doi: 10.1016/j.jenvman.2023.118149. Epub 2023 May 13.
8. Explainability and causability in digital pathology. J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12.
9. A review of explainable AI in the satellite data, deep machine learning, and human poverty domain. Patterns (N Y). 2022 Oct 14;3(10):100600. doi: 10.1016/j.patter.2022.100600.
10. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.

Cited by

1. On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments. BMC Med Inform Decis Mak. 2025 Mar 5;25(1):111. doi: 10.1186/s12911-025-02891-2.
2. High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare. BMC Med Ethics. 2025 Jan 15;26(1):4. doi: 10.1186/s12910-024-01158-1.
3. Progress Achieved, Landmarks, and Future Concerns in Biomedical and Health Informatics. Healthcare (Basel). 2024 Oct 15;12(20):2041. doi: 10.3390/healthcare12202041.
4. Beyond technology acceptance - a focused ethnography on the implementation, acceptance and use of new nursing technology in a German hospital. Front Digit Health. 2024 Apr 25;6:1330988. doi: 10.3389/fdgth.2024.1330988. eCollection 2024.
5. Constructing personalized characterizations of structural brain aberrations in patients with dementia using explainable artificial intelligence. NPJ Digit Med. 2024 May 2;7(1):110. doi: 10.1038/s41746-024-01123-7.
6. The Reporting Quality of Machine Learning Studies on Pediatric Diabetes Mellitus: Systematic Review. J Med Internet Res. 2024 Jan 19;26:e47430. doi: 10.2196/47430.
7. Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging. Front Cardiovasc Med. 2022 Jul 22;9:890809. doi: 10.3389/fcvm.2022.890809. eCollection 2022.
8. A Functional Contextual Account of Background Knowledge in Categorization: Implications for Artificial General Intelligence and Cognitive Accounts of General Knowledge. Front Psychol. 2022 Mar 2;13:745306. doi: 10.3389/fpsyg.2022.745306. eCollection 2022.
9. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
10. Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Sci Eng Ethics. 2021 Aug 24;27(5):59. doi: 10.1007/s11948-021-00334-5.

References

1. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2. Human Evaluation of Models Built for Interpretability. Proc AAAI Conf Hum Comput Crowdsourc. 2019;7(1):59-67. Epub 2019 Oct 28.
3. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan;577(7788):89-94. doi: 10.1038/s41586-019-1799-6. Epub 2020 Jan 1.
4. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447-453. doi: 10.1126/science.aax2342.
5. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol. 2019 Jul;20(7):938-947. doi: 10.1016/S1470-2045(19)30333-X. Epub 2019 Jun 12.
6. Adversarial attacks on medical machine learning. Science. 2019 Mar 22;363(6433):1287-1289. doi: 10.1126/science.aaw4399.
7. Clinical applications of machine learning algorithms: beyond the black box. BMJ. 2019 Mar 12;364:l886. doi: 10.1136/bmj.l886.
8. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019 Jan;49(1):15-21. doi: 10.1002/hast.973.
9. Mammographic Breast Density Assessment Using Deep Learning: Clinical Implementation. Radiology. 2019 Jan;290(1):52-58. doi: 10.1148/radiol.2018180694. Epub 2018 Oct 16.
10. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018 Sep;24(9):1342-1350. doi: 10.1038/s41591-018-0107-6. Epub 2018 Aug 13.