Dost Muhammad, Malika Bendechache
ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland.
Comput Struct Biotechnol J. 2024 Aug 12;24:542-560. doi: 10.1016/j.csbj.2024.08.005. eCollection 2024 Dec.
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discusses current challenges and future research directions, and explores the evaluation metrics used to assess XAI approaches. As Machine Learning (ML) and Deep Learning (DL) become increasingly effective in medical applications, the case for their adoption in healthcare grows more pressing. However, their "black-box" nature, in which decisions are made without clear explanations, hinders acceptance in clinical settings where decisions carry significant medicolegal consequences. Our review highlights advanced XAI methods and identifies how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges these methods face and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, fostering more transparent, trustworthy, and effective use of AI in medical settings. These insights can guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
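As a concrete illustration of the kind of post-hoc XAI method such reviews survey, the sketch below computes a gradient-based saliency map, a widely used technique for explaining image classifiers. The ResNet-18 backbone, random input tensor, and 224x224 image size are hypothetical placeholders chosen for illustration, not details drawn from the paper.

```python
# Minimal sketch of a gradient-based saliency map, one family of
# post-hoc XAI methods applied to medical image analysis. The model,
# input, and sizes below are hypothetical placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a trained medical-image classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder for a scan

logits = model(image)
target = logits.argmax(dim=1).item()  # explain the model's predicted class
logits[0, target].backward()          # gradient of the class score w.r.t. input pixels

# Per-pixel importance: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
print(saliency.shape)
```

In practice, such a saliency map would be overlaid on the input scan so a clinician can inspect which regions drove the prediction, which is precisely the transparency gap the review addresses.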