On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.

Author information

Blackman Justin, Veerapen Richard

Affiliations

Island Medical Program, Faculty of Medicine, University of British Columbia, University of Victoria, Victoria, BC, Canada.

School of Health Information Science, University of Victoria, Victoria, BC, Canada.

Publication information

BMC Med Inform Decis Mak. 2025 Mar 5;25(1):111. doi: 10.1186/s12911-025-02891-2.

Abstract

The necessity of explainability for artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize the discourse on common recurring themes, which is then critically analyzed and responded to. While the use of autonomous black-box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools with diagnoses of idiopathy and diagnoses of exclusion to analyze the implications for patient autonomy and informed consent. Applying a novel approach that draws comparisons with clinical practice guidelines, we contest the claim that a lack of explainability compromises clinician due diligence and undermines epistemological responsibility. We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with the exclusive use of clinical artificial intelligence.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb0e/11881432/acae6b6481bc/12911_2025_2891_Fig1_HTML.jpg
