Cleverson Vieira, Leonardo Rocha, Marcelo Guimarães, Diego Dias
Computer Science Department, Federal University of São João del-Rei (UFSJ), São João del-Rei, MG, Brazil.
Federal University of São Paulo (UNIFESP), Osasco, SP, Brazil.
Comput Biol Med. 2025 Feb;185:109556. doi: 10.1016/j.compbiomed.2024.109556. Epub 2024 Dec 19.
Machine learning models are applied across nearly every field of human activity. In healthcare, artificial intelligence techniques have revolutionized disease diagnosis, particularly image classification. Although these models achieve strong results, their lack of explainability has limited widespread adoption in clinical practice. In medical settings, understanding an AI model's decisions is essential not only for healthcare professionals' trust but also for regulatory compliance, patient safety, and accountability in cases of failure. Glaucoma, a neurodegenerative eye disease, can lead to irreversible blindness, making early detection crucial for preventing vision loss. Automated glaucoma detection has been a focus of intensive research in computer vision, with numerous studies proposing convolutional neural networks (CNNs) to analyze retinal fundus images and diagnose the disease automatically. However, these models often lack the explainability ophthalmologists need to understand and justify their decisions to patients. This paper explores and applies explainable artificial intelligence (XAI) techniques to different CNN architectures for glaucoma classification, comparing which explanation technique offers the best interpretive support for clinical diagnosis. We propose a new approach, SCIM (SHAP-CAM Interpretable Mapping), which has shown promising results. The experiments were conducted with an ophthalmology specialist, who highlighted that CAM-based interpretability, applied to the VGG16 and VGG19 architectures, stands out as the most effective resource for promoting interpretability and supporting diagnosis.
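For readers unfamiliar with the CAM family of techniques the abstract highlights, the sketch below illustrates Grad-CAM (a gradient-based CAM variant) on a Keras VGG16 backbone. It is a minimal illustration only, not the paper's SCIM method or its fine-tuned models: the ImageNet weights, the layer name block5_conv3, the class index, and the placeholder fundus input are all assumptions made to keep the example self-contained and runnable.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Stand-in backbone; the paper would use a model fine-tuned for
# glaucoma vs. normal fundus classification (hypothetical here).
model = VGG16(weights="imagenet")
last_conv = model.get_layer("block5_conv3")  # final conv layer of VGG16

# Model that exposes both the conv feature maps and the predictions.
grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

def grad_cam(image_batch, class_index):
    """Return a Grad-CAM heatmap for one preprocessed (1, 224, 224, 3) batch."""
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image_batch)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool gradients
    cam = tf.einsum("bijc,bc->bij", conv_out, weights)
    cam = tf.nn.relu(cam)                         # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()

# Placeholder input; in practice, load a fundus image resized to 224x224
# and overlay the upsampled heatmap on it for the clinician to inspect.
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255
heatmap = grad_cam(preprocess_input(img), class_index=0)

The resulting low-resolution map is typically upsampled to the input size and rendered as a color overlay, which is the kind of region-level visual evidence the specialist in the study found most useful for supporting a diagnosis.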