Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany.
Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
Eur J Radiol. 2023 May;162:110787. doi: 10.1016/j.ejrad.2023.110787. Epub 2023 Mar 21.
Artificial Intelligence (AI) has achieved significant success and delivered promising results across many fields of application over the last decade, and it has also become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in many areas. As AI rapidly reshapes research and promotes the development of personalized clinical care, its implementation creates an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. However, such systems can be highly complex and opaque, limiting immediate understanding of their decisions. In the medical field, these decisions carry high impact: physicians and patients can fully trust AI systems only when those systems reasonably communicate the origin of their results, which at the same time enables the identification of errors and biases. Explainable AI (XAI), an increasingly important field of research in recent years, promotes the formulation of explainability methods and provides rationales that allow users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content covers definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research on XAI in medical imaging. The paper focuses on saliency-based XAI methods, in which the explanation is provided directly on the input data (the image) and which are therefore of special importance in medical imaging.
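To make the notion of a saliency-based explanation concrete, the following is a minimal sketch of vanilla gradient saliency, one of the simplest methods in this family: the absolute gradient of the predicted class score with respect to each input pixel serves as a heatmap over the image. The PyTorch framing and the names model and image are hypothetical placeholders chosen for illustration, not artifacts of the paper.

import torch

def gradient_saliency(model, image):
    # Vanilla gradient saliency: the magnitude of d(class score)/d(pixel)
    # indicates how strongly each pixel influences the prediction.
    # `model` (a hypothetical classifier returning logits for a
    # (1, C, H, W) input) and `image` are placeholders for illustration.
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    logits = model(image)
    score = logits[0, logits[0].argmax()]  # score of the top-predicted class
    score.backward()                       # gradients w.r.t. the input pixels
    # Collapse the channel dimension to an (H, W) heatmap that can be
    # overlaid directly on the input image.
    return image.grad.detach().abs().max(dim=1).values.squeeze(0)

Overlaying the returned heatmap on the original scan is what makes this family of methods attractive in radiology: the explanation lives in the same spatial domain the reader already interprets.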