
Comparative evaluation of CAM methods for enhancing explainability in veterinary radiography.

Author information

Dusza Piotr, Banzato Tommaso, Burti Silvia, Bendazzoli Margherita, Müller Henning, Wodzinski Marek

Affiliations

AGH University of Krakow, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, Krakow, 30059, Poland.

University of Applied Sciences Western Switzerland (HES-SO), Institute of Informatics, Sierre, 3960, Switzerland.

Publication information

Sci Rep. 2025 Aug 13;15(1):29690. doi: 10.1038/s41598-025-14060-6.

Abstract

Explainable Artificial Intelligence (XAI) encompasses a broad spectrum of methods that aim to enhance the transparency of deep learning models, with Class Activation Mapping (CAM) methods widely used for visual interpretability. However, systematic evaluations of these methods in veterinary radiography remain scarce. This study presents a comparative analysis of eleven CAM methods, including GradCAM, XGradCAM, ScoreCAM, and EigenCAM, on a dataset of 7362 canine and feline X-ray images. A ResNet18 model was chosen based on the characteristics of the dataset and on preliminary results in which it outperformed other models. Quantitative and qualitative evaluations were performed to determine how well each CAM method produced interpretable heatmaps relevant to clinical decision-making. Among the techniques evaluated, EigenGradCAM achieved the highest mean score, 2.571 (SD = 1.256), closely followed by EigenCAM at 2.519 (SD = 1.228) and GradCAM++ at 2.512 (SD = 1.277); methods such as FullGrad and XGradCAM received the lowest scores, at 2.000 (SD = 1.300) and 1.858 (SD = 1.198) respectively. Despite variations in saliency visualization, no single method universally improved veterinarians' diagnostic confidence. While certain CAM methods provided better visual cues for some pathologies, they generally offered limited explainability and did not substantially improve veterinarians' diagnostic confidence.
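The CAM methods compared in the study share a common core: a spatial heatmap is built as a weighted combination of the feature maps of a convolutional layer (e.g. the last block of ResNet18), with each method differing mainly in how the per-channel weights are derived. The sketch below illustrates the Grad-CAM variant of that combination step in plain NumPy, using random tensors in place of real network activations and gradients; the function name and shapes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Combine conv feature maps into a Grad-CAM-style heatmap.

    activations: (K, H, W) feature maps from the target conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))                              # (K,)
    # Weighted sum of the feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: shapes mimic the 512x7x7 output of ResNet18's last conv block.
rng = np.random.default_rng(0)
acts = rng.standard_normal((512, 7, 7))
grads = rng.standard_normal((512, 7, 7))
heatmap = grad_cam_heatmap(acts, grads)
print(heatmap.shape)
```

Variants such as EigenCAM replace the gradient-derived weights with a projection onto the principal component of the activations, which is why they can behave differently on the same model and image.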


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/9f92d428d19e/41598_2025_14060_Fig1_HTML.jpg
