Boge Florian, Mosig Axel
Institute for Philosophy and Political Science, Technical University Dortmund, Emil-Figge-Str. 50, 44227, Dortmund, Germany.
Bioinformatics Group, Department for Biology and Biotechnology, Ruhr-University Bochum (RUB), Gesundheitscampus 4, 44801, Bochum, NRW, Germany.
Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.
With the rapid advances of deep neural networks over the past decade, artificial intelligence (AI) systems have become commonplace in many biomedical applications. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite their commonly high predictive accuracy, the trustworthiness of AI systems must be questioned when it comes to decision-making that affects the well-being of patients or the fairness towards patients or other stakeholders affected by AI-based decisions. To address this, the field of explainable artificial intelligence (XAI) has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations of AI establish the trustworthiness of AI-based decisions, it remains unclear which traits of explanations cause them to foster trustworthiness. Building on historical cases of scientific explanation in medicine, we here advance our perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of being scientific explanations. To further underpin our approach, we discuss its relation to the concepts of causality and randomized intervention. In our perspective, we combine aspects from the three disciplines of biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of AI relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent cases of AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.