

A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences

Authors

Graziani Mara, Dutkiewicz Lidia, Calvaresi Davide, Amorim José Pereira, Yordanova Katerina, Vered Mor, Nair Rahul, Abreu Pedro Henriques, Blanke Tobias, Pulignano Valeria, Prior John O, Lauwaert Lode, Reijers Wessel, Depeursinge Adrien, Andrearczyk Vincent, Müller Henning

Affiliations

University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, Sierre, 3960 Valais Switzerland.

Department of Computer Science, University of Geneva (UniGe), Route de Drize 7, Carouge, 1227 Vaud Switzerland.

Publication

Artif Intell Rev. 2023;56(4):3473-3504. doi: 10.1007/s10462-022-10256-8. Epub 2022 Sep 6.

DOI: 10.1007/s10462-022-10256-8
PMID: 36092822
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9446618/
Abstract

Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug, justify outcomes, and establish the safety, fairness and reliability of the models. This variety of tasks has led to inconsistencies in the terminology with, for instance, terms such as "interpretability", "explainability" and "transparency" being often used interchangeably in methodology papers. These words, however, convey different meanings and are "weighted" differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by the technical developers as much as by the social sciences community to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from the ones in previous research and how they apply with high versatility to several domains and use cases, proposing a highly needed standard for the communication among interdisciplinary areas of AI.


Similar articles

1. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.
Artif Intell Rev. 2023;56(4):3473-3504. doi: 10.1007/s10462-022-10256-8. Epub 2022 Sep 6.
2. Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
3. Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model.
Comput Methods Programs Biomed. 2024 Sep;254:108289. doi: 10.1016/j.cmpb.2024.108289. Epub 2024 Jun 17.
4. The Virtues of Interpretable Medical Artificial Intelligence.
Camb Q Healthc Ethics. 2022 Dec 16:1-10. doi: 10.1017/S0963180122000305.
5. Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.
J Neural Eng. 2024 Aug 8;21(4). doi: 10.1088/1741-2552/ad6593.
6. The Virtues of Interpretable Medical AI.
Camb Q Healthc Ethics. 2024 Jul;33(3):323-332. doi: 10.1017/S0963180122000664. Epub 2023 Jan 10.
7. A review of explainable and interpretable AI with applications in COVID-19 imaging.
Med Phys. 2022 Jan;49(1):1-14. doi: 10.1002/mp.15359. Epub 2021 Dec 7.
8. Transparency of deep neural networks for medical image analysis: A review of interpretability methods.
Comput Biol Med. 2022 Jan;140:105111. doi: 10.1016/j.compbiomed.2021.105111. Epub 2021 Dec 4.
9. Explainable artificial intelligence in emergency medicine: an overview.
Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
10. Explainable AI: A Review of Machine Learning Interpretability Methods.
Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.

Cited by

1. Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review.
J Med Internet Res. 2025 Aug 7;27:e73374. doi: 10.2196/73374.
2. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool.
AI Ethics. 2025 Apr;5(2):1499-1521. doi: 10.1007/s43681-024-00493-8. Epub 2024 May 27.
3. Model interpretability enhances domain generalization in the case of textual complexity modeling.
Patterns (N Y). 2025 Feb 6;6(2):101177. doi: 10.1016/j.patter.2025.101177. eCollection 2025 Feb 14.
4. Opening the black box: challenges and opportunities regarding interpretability of artificial intelligence in emergency medicine.
CJEM. 2025 Feb;27(2):83-86. doi: 10.1007/s43678-024-00827-9. Epub 2025 Feb 17.
5. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
6. Artificial Intelligence-What to Expect From Machine Learning and Deep Learning in Hernia Surgery.
J Abdom Wall Surg. 2024 Sep 6;3:13059. doi: 10.3389/jaws.2024.13059. eCollection 2024.
7. Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging.
NPJ Digit Med. 2024 Jul 22;7(1):195. doi: 10.1038/s41746-024-01190-w.
8. Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis.
Sci Eng Ethics. 2024 May 27;30(3):22. doi: 10.1007/s11948-024-00479-z.
9. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems.
Heliyon. 2024 Mar 25;10(7):e28562. doi: 10.1016/j.heliyon.2024.e28562. eCollection 2024 Apr 15.
10. Nurses' perceptions, experience and knowledge regarding artificial intelligence: results from a cross-sectional online survey in Germany.
BMC Nurs. 2024 Mar 27;23(1):205. doi: 10.1186/s12912-024-01884-2.

References

1. Neuro-symbolic approaches in artificial intelligence.
Natl Sci Rev. 2022 Mar 4;9(6):nwac035. doi: 10.1093/nsr/nwac035. eCollection 2022 Jun.
2. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
3. When Artificial Intelligence Models Surpass Physician Performance: Medical Malpractice Liability in an Era of Advanced Artificial Intelligence.
J Am Coll Radiol. 2022 Jul;19(7):816-820. doi: 10.1016/j.jacr.2021.11.014. Epub 2022 Feb 1.
4. Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists.
IEEE Rev Biomed Eng. 2023;16:192-207. doi: 10.1109/RBME.2021.3131358. Epub 2023 Jan 5.
5. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314. Epub 2021 Oct 27.
6. Continuous Learning AI in Radiology: Implementation Principles and Early Applications.
Radiology. 2020 Oct;297(1):6-14. doi: 10.1148/radiol.2020200038. Epub 2020 Aug 25.
7. Concept attribution: Explaining CNN decisions to physicians.
Comput Biol Med. 2020 Aug;123:103865. doi: 10.1016/j.compbiomed.2020.103865. Epub 2020 Jun 17.
8. "Explaining" machine learning reveals policy challenges.
Science. 2020 Jun 26;368(6498):1433-1434. doi: 10.1126/science.aba9647.
9. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians.
J Med Internet Res. 2020 Jun 19;22(6):e15154. doi: 10.2196/15154.
10. On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities.
Radiol Artif Intell. 2020 May 27;2(3):e190043. doi: 10.1148/ryai.2020190043.