Saw Shier Nee, Yan Yet Yen, Ng Kwan Hoong
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia.
Department of Radiology, Changi General Hospital, Singapore; Radiological Sciences ACP, Duke-NUS Medical School, Singapore; Present Address: Department of Diagnostic Radiology, Mount Elizabeth Hospital, 3 Mount Elizabeth, Singapore 228510, Republic of Singapore.
Eur J Radiol. 2025 Feb;183:111884. doi: 10.1016/j.ejrad.2024.111884. Epub 2024 Dec 6.
The inherent "black box" nature of AI algorithms presents a substantial barrier to the widespread adoption of the technology in clinical settings, leading to a lack of trust among users. This review begins by examining the foundational stages involved in the interpretation of medical images by radiologists and clinicians, encompassing both type 1 (fast thinking: the brain's ability to reason and act intuitively) and type 2 (slow thinking: a deliberate, analytical, and laborious approach to decision-making) processes. The discussion then delves into current Explainable AI (XAI) approaches, exploring both inherent and post-hoc explainability for medical imaging applications and highlighting the milestones achieved. XAI in medicine refers to AI systems designed to provide transparent, interpretable, and understandable reasoning behind their predictions or decisions. Additionally, the paper showcases some commercial AI medical systems that offer explanations through features such as heatmaps. Opportunities, challenges, and potential avenues for advancing the field are also addressed. In conclusion, the review observes that state-of-the-art XAI methods are not yet mature enough for clinical implementation, as the explanations they provide remain difficult for medical experts to comprehend. A deeper understanding of the cognitive mechanisms employed by medical professionals will be important for developing more interpretable XAI methods.
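To make the heatmap-style post-hoc explanations concrete, below is a minimal sketch of Grad-CAM, a representative saliency method from the XAI literature. This is illustrative only: the PyTorch/torchvision model (an untrained resnet18) and the random input tensor are assumptions standing in for a trained medical-imaging classifier and a preprocessed scan, and Grad-CAM here stands in for the broader family of heatmap explanations the review discusses, not any specific commercial system's method.

```python
# Minimal Grad-CAM sketch: a post-hoc heatmap explanation for a CNN classifier.
# Assumptions: PyTorch + torchvision; resnet18 with random weights and a random
# input tensor stand in for a real medical-imaging model and image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in for a trained model

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block; its feature maps drive the heatmap.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)      # stands in for a preprocessed scan
scores = model(x)
target = scores[0, scores.argmax()]  # explain the top predicted class
model.zero_grad()
target.backward()

# Grad-CAM: channel weights = global-average-pooled gradients, then a
# ReLU-rectified weighted sum of the feature maps, upsampled to input size.
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap, ready to overlay on the image
```

The resulting map highlights which image regions most influenced the prediction; the review's caution applies here too, since such heatmaps indicate *where* the model looked, not *why* it decided, which is part of what makes them hard for medical experts to interpret.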