
Artificial intelligence explainability: the technical and ethical dimensions.

Affiliations

Department of Computer Science, University of York, Deramore Lane, York YO10 5GH, UK.

Publication Information

Philos Trans A Math Phys Eng Sci. 2021 Oct 4;379(2207):20200363. doi: 10.1098/rsta.2020.0363. Epub 2021 Aug 16.

DOI: 10.1098/rsta.2020.0363
PMID: 34398656
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8366909/
Abstract

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods, and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue 'Towards symbiotic autonomous systems'.

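The abstract refers to "XAI methods" only in general terms. As a purely illustrative sketch that is not taken from the article, the snippet below applies permutation feature importance, one common post-hoc, model-agnostic explainability technique, to a synthetic classifier; the data, model, and parameter choices are all hypothetical.

```python
# Minimal sketch of a post-hoc, model-agnostic XAI method (illustrative only).
# Permutation importance estimates how much a trained model relies on each
# feature by measuring the drop in score when that feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real feature table (hypothetical example).
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the mean drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Methods of this kind explain a model's behaviour after training; as the paper argues, their output still needs to be read alongside the human decisions made during development.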

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b72/8366909/7ee1f147f492/rsta20200363f01.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b72/8366909/4d8b421bb75b/rsta20200363f02.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b72/8366909/31d2e5e3010f/rsta20200363f03.jpg

Similar Articles

1. Artificial intelligence explainability: the technical and ethical dimensions.
Philos Trans A Math Phys Eng Sci. 2021 Oct 4;379(2207):20200363. doi: 10.1098/rsta.2020.0363. Epub 2021 Aug 16.
2. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
3. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.
4. Explainability and white box in drug discovery.
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
5. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
6. Explainability and causability in digital pathology.
J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12.
7. Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
8. A historical perspective of biomedical explainable AI research.
Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.
9. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI).
Sensors (Basel). 2022 Aug 23;22(17):6338. doi: 10.3390/s22176338.
10. Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.
J Neural Eng. 2024 Aug 8;21(4). doi: 10.1088/1741-2552/ad6593.

Cited By

1. Advanced feature engineering in Acute:Chronic Workload Ratio (ACWR) calculation for injury forecasting in elite soccer.
PLoS One. 2025 Jul 23;20(7):e0327960. doi: 10.1371/journal.pone.0327960. eCollection 2025.
2. The Acceptability of AI-Driven Resource Signposting to Young People Using a Mental Health Peer Support App.
Digit Soc. 2025;4(2):45. doi: 10.1007/s44206-025-00202-w. Epub 2025 Jun 4.
3. Establishing and evaluating trustworthy AI: overview and research challenges.
Front Big Data. 2024 Nov 29;7:1467222. doi: 10.3389/fdata.2024.1467222. eCollection 2024.
4. Progress Achieved, Landmarks, and Future Concerns in Biomedical and Health Informatics.
Healthcare (Basel). 2024 Oct 15;12(20):2041. doi: 10.3390/healthcare12202041.
5. Clinicians risk becoming 'liability sinks' for artificial intelligence.
Future Healthc J. 2024 Feb 19;11(1):100007. doi: 10.1016/j.fhj.2024.100007. eCollection 2024 Mar.
6. Integrative approaches based on genomic techniques in the functional studies on enhancers.
Brief Bioinform. 2023 Nov 22;25(1). doi: 10.1093/bib/bbad442.
7. Explainable AI as evidence of fair decisions.
Front Psychol. 2023 Feb 14;14:1069426. doi: 10.3389/fpsyg.2023.1069426. eCollection 2023.
8. Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment.
BMJ Health Care Inform. 2022 Jul;29(1). doi: 10.1136/bmjhci-2022-100549.

References

1. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2. Prediction of weaning from mechanical ventilation using Convolutional Neural Networks.
Artif Intell Med. 2021 Jul;117:102087. doi: 10.1016/j.artmed.2021.102087. Epub 2021 May 5.
3. From Local Explanations to Global Understanding with Explainable AI for Trees.
Nat Mach Intell. 2020 Jan;2(1):56-67. doi: 10.1038/s42256-019-0138-9. Epub 2020 Jan 17.
4. Artificial intelligence in health care: accountability and safety.
Bull World Health Organ. 2020 Apr 1;98(4):251-256. doi: 10.2471/BLT.19.237487. Epub 2020 Feb 25.
5. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review.
Neural Comput. 2017 Sep;29(9):2352-2449. doi: 10.1162/NECO_a_00990. Epub 2017 Jun 9.
6. MIMIC-III, a freely accessible critical care database.
Sci Data. 2016 May 24;3:160035. doi: 10.1038/sdata.2016.35.
7. Improvement in the Prediction of Ventilator Weaning Outcomes by an Artificial Neural Network in a Medical ICU.
Respir Care. 2015 Nov;60(11):1560-9. doi: 10.4187/respcare.03648. Epub 2015 Sep 1.
8. ICU occupancy and mechanical ventilator use in the United States.
Crit Care Med. 2013 Dec;41(12):2712-9. doi: 10.1097/CCM.0b013e318298a139.
9. The difficult-to-wean patient.
Expert Rev Respir Med. 2010 Oct;4(5):685-92. doi: 10.1586/ers.10.58.
10. The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients.
Am J Respir Crit Care Med. 2002 Nov 15;166(10):1338-44. doi: 10.1164/rccm.2107138.