

On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.

Authors

Blackman Justin, Veerapen Richard

Affiliations

Island Medical Program, Faculty of Medicine, University of British Columbia, University of Victoria, Victoria, BC, Canada.

School of Health Information Science, University of Victoria, Victoria, BC, Canada.

Publication

BMC Med Inform Decis Mak. 2025 Mar 5;25(1):111. doi: 10.1186/s12911-025-02891-2.

DOI:10.1186/s12911-025-02891-2
PMID:40045339
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11881432/
Abstract

The necessity for explainability of artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize discourse on common recurring themes and subsequently critically analyze and respond to it. While the use of autonomous black box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools, diagnosis of idiopathy, and diagnoses by exclusion, to analyze implications on patient autonomy and informed consent. Applying a novel approach using comparisons with clinical practice guidelines, we contest the claim that lack of explainability compromises clinician due diligence and undermines epistemological responsibility. We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with the exclusive use of clinical artificial intelligence.


Figures (from PMC11881432):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb0e/11881432/acae6b6481bc/12911_2025_2891_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb0e/11881432/3c69cbdb1d93/12911_2025_2891_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb0e/11881432/0bfa360d278b/12911_2025_2891_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb0e/11881432/780bd5d8386e/12911_2025_2891_Fig4_HTML.jpg

Similar articles

1
On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.
BMC Med Inform Decis Mak. 2025 Mar 5;25(1):111. doi: 10.1186/s12911-025-02891-2.
2
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.
3
The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
4
Artificial intelligence in medicine: Ethical, social and legal perspectives.
Ann Acad Med Singap. 2023 Dec 28;52(12):695-699. doi: 10.47102/annals-acadmedsg.2023272.
5
Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based.
J Clin Epidemiol. 2022 Feb;142:252-257. doi: 10.1016/j.jclinepi.2021.11.001. Epub 2021 Nov 5.
6
What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach.
Bioengineering (Basel). 2025 Apr 2;12(4):375. doi: 10.3390/bioengineering12040375.
7
The ethical, legal and social implications of using artificial intelligence systems in breast cancer care.
Breast. 2020 Feb;49:25-32. doi: 10.1016/j.breast.2019.10.001. Epub 2019 Oct 11.
8
Ethical and legal challenges of medical AI on informed consent: China as an example.
Dev World Bioeth. 2025 Mar;25(1):46-54. doi: 10.1111/dewb.12442. Epub 2024 Jan 19.
9
Neurosurgery, Explainable AI, and Legal Liability.
Adv Exp Med Biol. 2024;1462:543-553. doi: 10.1007/978-3-031-64892-2_34.
10
Legal and ethical considerations of artificial intelligence in skin cancer diagnosis.
Australas J Dermatol. 2022 Feb;63(1):e1-e5. doi: 10.1111/ajd.13690. Epub 2021 Aug 18.

Cited by

1
Artificial Intelligence in Orthopedic Surgery: Current Applications, Challenges, and Future Directions.
MedComm (2020). 2025 Jun 25;6(7):e70260. doi: 10.1002/mco2.70260. eCollection 2025 Jul.
2
Nanomaterials reshape the pulmonary mechanical microenvironment: novel therapeutic strategies for respiratory diseases.
Front Bioeng Biotechnol. 2025 May 2;13:1597387. doi: 10.3389/fbioe.2025.1597387. eCollection 2025.

References

1
The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
2
Heterogeneity and predictors of the effects of AI assistance on radiologists.
Nat Med. 2024 Mar;30(3):837-849. doi: 10.1038/s41591-024-02850-w. Epub 2024 Mar 19.
3
Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance.
J Nucl Med. 2023 Oct;64(10):1509-1515. doi: 10.2967/jnumed.123.266110. Epub 2023 Aug 24.
4
Editorial: Transparent machine learning in bio-medicine.
Front Bioinform. 2023 Aug 4;3:1264803. doi: 10.3389/fbinf.2023.1264803. eCollection 2023.
5
Bioethics Without Theory?
Camb Q Healthc Ethics. 2024 Apr;33(2):159-166. doi: 10.1017/S0963180123000348. Epub 2023 Jul 28.
6
On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility.
Camb Q Healthc Ethics. 2023 Jun 9:1-7. doi: 10.1017/S0963180123000294.
7
Black-box assisted medical decisions: AI power vs. ethical physician care.
Med Health Care Philos. 2023 Sep;26(3):285-292. doi: 10.1007/s11019-023-10153-z. Epub 2023 Jun 5.
8
Artificial Intelligence Algorithms Need to Be Explainable-or Do They?
J Nucl Med. 2023 Jun;64(6):976-977. doi: 10.2967/jnumed.122.264949. Epub 2023 Apr 28.
9
Racial bias in cesarean decision-making.
Am J Obstet Gynecol MFM. 2023 May;5(5):100927. doi: 10.1016/j.ajogmf.2023.100927. Epub 2023 Mar 14.
10
"Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations.
AI Soc. 2022 Dec 21:1-12. doi: 10.1007/s00146-022-01614-9.