Borys Katarzyna, Schmitt Yasmin Alyssa, Nauta Meike, Seifert Christin, Krämer Nicole, Friedrich Christoph M, Nensa Felix
Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany.
Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.
Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the adoption of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, where AI supports several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI is reshaping medical research and contributing to the development of personalized clinical care. Consequently, this growing adoption creates a need for a thorough understanding of AI systems, their inner workings, potentials, and limitations, a need that the field of eXplainable AI (XAI) aims to address. Because medical imaging is mainly associated with visual tasks, most explainability approaches rely on saliency-based XAI methods. In contrast, this article investigates the full potential of XAI methods in medical imaging by focusing specifically on XAI techniques that do not rely on saliency, and by providing diversified examples. We address a broad audience, particularly healthcare professionals. Moreover, this work aims to establish common ground for cross-disciplinary understanding and exchange between Deep Learning (DL) developers and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are grouped by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.