Hafeez Yasir, Memon Khuhed, Al-Quraishi Maged S, Yahya Norashikin, Elferik Sami, Ali Syed Saad Azhar
Faculty of Science and Engineering, University of Nottingham, Jalan Broga, Semenyih 43500, Selangor Darul Ehsan, Malaysia.
Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia.
Diagnostics (Basel). 2025 Jan 13;15(2):168. doi: 10.3390/diagnostics15020168.
Artificial intelligence (AI) has recently made unprecedented contributions in every walk of life, but it has not yet been able to work its way into diagnostic medicine and standard clinical practice. Although data scientists, researchers, and medical experts have been working toward designing and developing computer-aided diagnosis (CAD) tools to serve as assistants to doctors, their large-scale adoption and integration into the healthcare system still seems far off. Diagnostic radiology is no exception. Imaging techniques like magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scans have been widely and very effectively employed by radiologists and neurologists for the differential diagnosis of neurological disorders for decades, yet no AI-powered systems to analyze such scans have been incorporated into the standard operating procedures of healthcare systems. Why? It is absolutely understandable that in diagnostic medicine, precious human lives are on the line, and hence there is no room even for the tiniest of mistakes. Nevertheless, with the advent of explainable artificial intelligence (XAI), the old-school black boxes of deep learning (DL) systems have been unraveled. Would XAI be the turning point for medical experts to finally embrace AI in diagnostic radiology? This review is a humble endeavor to find the answers to these questions. In this review, we present the journey and contributions of AI in developing systems to recognize, preprocess, and analyze brain MRI scans for the differential diagnosis of various neurological disorders, with special emphasis on CAD systems embedded with explainability. A comprehensive review of the literature from 2017 to 2024 was conducted using major databases.
We also present medical domain experts' opinions and summarize the challenges that lie ahead and must be addressed in order to fully exploit the tremendous potential of XAI in its application to medical diagnostics and thereby serve humanity. Forty-seven studies were summarized and tabulated with information about the XAI techniques and datasets employed, along with performance accuracies. The strengths and weaknesses of the studies are also discussed. In addition, the opinions of seven medical experts from around the world are presented to guide engineers and data scientists in developing such CAD tools. Current CAD research was observed to focus on enhancing the performance accuracies of DL models, with less attention paid to the authenticity and usefulness of the explanations. A shortage of ground-truth data for explainability was also observed. Visual explanation methods were found to dominate; however, they might not be enough, and more thorough, human professor-like explanations would be required to build the trust of healthcare professionals. Special attention to these factors, along with the legal, ethical, safety, and security issues, can bridge the current gap between XAI and routine clinical practice.