Freyer Nils, Groß Dominik, Lipprandt Myriam
Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany.
Institute for the History, Theory and Ethics of Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany.
BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2.
Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence-based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions for, and solutions to, the justified use of this occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate has not yet settled on this requirement. This systematic review aims to outline and categorize the positions and arguments in the ethical debate.
We conducted a literature search on PubMed, BASE, and Scopus for English-language, peer-reviewed scientific publications from 2016 to 2024. The inclusion criterion was that a publication states explicit explainability requirements for AI-DSS in healthcare and provides reasons for them. Non-domain-specific documents, as well as surveys, reviews, and meta-analyses, were excluded. The ethical requirements for explainability outlined in the included documents were analyzed qualitatively with respect to the arguments given for requiring explainability and the level of explainability required.
The literature search yielded 1662 documents; after eligibility screening of the full texts, 44 documents were included in the review. Our analysis showed that 17 records argue in favor of requiring explainable AI methods (xAI) or ad-hoc explainable models, providing 9 categories of arguments; the other 27 records argue against a general requirement, providing 11 categories of arguments. We also found that 14 works advocate context-dependent levels of explainability, whereas 30 documents argue for context-independent, absolute standards.
This systematic review of reasons shows no clear agreement on requiring post-hoc explainability methods or ad-hoc explainable models for AI-DSS in healthcare. The arguments found in the debate were referenced and responded to from different perspectives, indicating an interactive discourse. Policymakers and researchers should watch the development of the debate closely. In turn, ethicists should stay well informed by empirical and technical research, given the pace of advances in the field.