

The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI.

Affiliations

Aalborg University, Centre for Applied Ethics and Philosophy of Science, Department of Communication and Psychology, A. C. Meyers Vænge 15, 2450 Copenhagen SV, Denmark.

University of Manchester, Centre for Social Ethics and Policy, School of Law, Manchester M13 9PL, United Kingdom; Center for Medical Ethics, Faculty of Medicine, University of Oslo, Norway.

Publication Info

Artif Intell Med. 2020 Jul;107:101901. doi: 10.1016/j.artmed.2020.101901. Epub 2020 Jun 9.

DOI: 10.1016/j.artmed.2020.101901
PMID: 32828448
Abstract

The problem of the explainability of AI decision-making has attracted considerable attention in recent years. In considering AI diagnostics, we suggest that explainability should be explicated as 'effective contestability'. Taking a patient-centric approach, we argue that patients should be able to contest the diagnoses of AI diagnostic systems, and that effective contestation of patient-relevant aspects of AI diagnoses requires the availability of different types of information about 1) the AI system's use of data, 2) the system's potential biases, 3) the system's performance, and 4) the division of labour between the system and health care professionals. We justify and define thirteen specific informational requirements that follow from 'contestability'. We further show not only that contestability is a weaker requirement than some of the proposed criteria of explainability, but also that it does not introduce poorly grounded double standards for AI and health care professionals' diagnostics, and does not come at the cost of AI system performance. Finally, we briefly discuss whether the contestability requirements introduced here are domain-specific.


Similar Articles

1. The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI.
Artif Intell Med. 2020 Jul;107:101901. doi: 10.1016/j.artmed.2020.101901. Epub 2020 Jun 9.
2. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
3. The false hope of current approaches to explainable artificial intelligence in health care.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
4. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
5. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
6. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.
N Biotechnol. 2022 Sep 25;70:67-72. doi: 10.1016/j.nbt.2022.05.002. Epub 2022 May 6.
7. Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries.
J Am Med Inform Assoc. 2021 Sep 18;28(10):2128-2138. doi: 10.1093/jamia/ocab127.
8. Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.
Eur J Radiol. 2023 May;162:110787. doi: 10.1016/j.ejrad.2023.110787. Epub 2023 Mar 21.
9. Towards a Knowledge Graph-Based Explainable Decision Support System in Healthcare.
Stud Health Technol Inform. 2021 May 27;281:502-503. doi: 10.3233/SHTI210215.
10. Re-focusing explainability in medicine.
Digit Health. 2022 Feb 11;8:20552076221074488. doi: 10.1177/20552076221074488. eCollection 2022 Jan-Dec.

Cited By

1. Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.
Sci Rep. 2025 Aug 29;15(1):31892. doi: 10.1038/s41598-025-15867-z.
2. Contestable AI for criminal intelligence analysis: improving decision-making through semantic modeling and human oversight.
Front Artif Intell. 2025 Jul 1;8:1602998. doi: 10.3389/frai.2025.1602998. eCollection 2025.
3. The need for patient rights in AI-driven healthcare - risk-based regulation is not enough.
J R Soc Med. 2025 Jun 25:1410768251344707. doi: 10.1177/01410768251344707.
4. Proactive vs. passive algorithmic ethics practices in healthcare: the moderating role of healthcare engagement type in patients' responses.
BMC Med Ethics. 2025 Jun 7;26(1):73. doi: 10.1186/s12910-025-01236-y.
5. Machine learning innovations in CPR: a comprehensive survey on enhanced resuscitation techniques.
Artif Intell Rev. 2025;58(8):233. doi: 10.1007/s10462-025-11214-w. Epub 2025 May 5.
6. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
7. Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care.
Am J Bioeth. 2025 Mar;25(3):102-114. doi: 10.1080/15265161.2024.2399828. Epub 2024 Sep 17.
8. Unveiling the black box: imperative for explainable AI in cardiovascular disease prevention.
Lancet Reg Health West Pac. 2024 Jul 13;48:101145. doi: 10.1016/j.lanwpc.2024.101145. eCollection 2024 Jul.
9. Digital pathology implementation in cancer diagnostics: towards informed decision-making.
Front Digit Health. 2024 May 30;6:1358305. doi: 10.3389/fdgth.2024.1358305. eCollection 2024.
10. Attitude and Understanding of Artificial Intelligence Among Saudi Medical Students: An Online Cross-Sectional Study.
J Multidiscip Healthc. 2024 Apr 29;17:1887-1899. doi: 10.2147/JMDH.S455260. eCollection 2024.