Suppr 超能文献




Comparative evaluation of CAM methods for enhancing explainability in veterinary radiography.

Authors

Dusza Piotr, Banzato Tommaso, Burti Silvia, Bendazzoli Margherita, Müller Henning, Wodzinski Marek

Affiliations

AGH University of Krakow, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, Krakow, 30059, Poland.

University of Applied Sciences Western Switzerland (HES-SO), Institute of Informatics, Sierre, 3960, Switzerland.

Publication

Sci Rep. 2025 Aug 13;15(1):29690. doi: 10.1038/s41598-025-14060-6.

DOI: 10.1038/s41598-025-14060-6
PMID: 40804451
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12350829/
Abstract

Explainable Artificial Intelligence (XAI) encompasses a broad spectrum of methods that aim to enhance the transparency of deep learning models, with Class Activation Mapping (CAM) methods widely used for visual interpretability. However, systematic evaluations of these methods in veterinary radiography remain scarce. This study presents a comparative analysis of eleven CAM methods, including GradCAM, XGradCAM, ScoreCAM, and EigenCAM, on a dataset of 7362 canine and feline X-ray images. A ResNet18 model was chosen based on the specificity of the dataset and preliminary results in which it outperformed other models. Quantitative and qualitative evaluations were performed to determine how well each CAM method produced interpretable heatmaps relevant to clinical decision-making. Among the techniques evaluated, EigenGradCAM achieved the highest mean score, 2.571 (SD = 1.256), closely followed by EigenCAM at 2.519 (SD = 1.228) and GradCAM++ at 2.512 (SD = 1.277), while methods such as FullGrad and XGradCAM achieved the lowest scores, 2.000 (SD = 1.300) and 1.858 (SD = 1.198) respectively. Despite variations in saliency visualization, no single method universally improved veterinarians' diagnostic confidence. While certain CAM methods provided better visual cues for some pathologies, they generally offered limited explainability and did not substantially improve veterinarians' diagnostic confidence.
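For readers unfamiliar with how these saliency maps are computed, the core of two of the method families compared above can be sketched in a few lines. The following is a minimal NumPy illustration of the GradCAM and EigenCAM computations, not the authors' implementation (the study applied eleven CAM variants to a trained ResNet18; here `activations` and `gradients` are stand-ins for the feature maps and gradients of a chosen target convolutional layer):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM: weight each feature map by its spatially
    averaged gradient, sum over channels, then ReLU and normalize.

    activations, gradients: arrays of shape (K, H, W) taken from the
    target convolutional layer for the class of interest.
    """
    weights = gradients.mean(axis=(1, 2))             # (K,) channel weights
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                         # scale to [0, 1]
    return cam

def eigen_cam(activations):
    """Minimal EigenCAM: project the activations onto their first
    principal component; no gradients are required."""
    k, h, w = activations.shape
    a = activations.reshape(k, h * w).T               # (H*W, K) feature matrix
    a = a - a.mean(axis=0)                            # center the features
    _, _, vt = np.linalg.svd(a, full_matrices=False)
    proj = a @ vt[0]                                  # first principal component
    if proj.sum() < 0:                                # resolve SVD sign ambiguity
        proj = -proj
    cam = np.maximum(proj, 0).reshape(h, w)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

GradCAM needs one backward pass to obtain the gradients, whereas EigenCAM ignores gradients entirely, which is why it can be applied even without backpropagation access to the model. Both produce a low-resolution heatmap that is then upsampled and overlaid on the radiograph.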


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/9f92d428d19e/41598_2025_14060_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/9d6e930025ab/41598_2025_14060_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/67e877a9d850/41598_2025_14060_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/0bea14ee5cb1/41598_2025_14060_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/6d54ba4f82ff/41598_2025_14060_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/f6be634685f4/41598_2025_14060_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/fe57666b9bb0/41598_2025_14060_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/86b198c23c58/41598_2025_14060_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/d9d61fa73871/41598_2025_14060_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/3840e4ffa823/41598_2025_14060_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/d9a38276054e/41598_2025_14060_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/5573cce97698/41598_2025_14060_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/5d24566a8872/41598_2025_14060_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/2d9b78bcac4a/41598_2025_14060_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/c1355afd1ebb/41598_2025_14060_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/88f9d7f64f55/41598_2025_14060_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/886ae79bacd8/41598_2025_14060_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/3f83cebdbcaf/41598_2025_14060_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/b5dfaffe0a70/41598_2025_14060_Fig19_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/d105a12dca61/41598_2025_14060_Fig20_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/ce0ae6e1e45e/41598_2025_14060_Fig21_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6e31/12350829/d80dd422940a/41598_2025_14060_Fig22_HTML.jpg

Similar Articles

1
Comparative evaluation of CAM methods for enhancing explainability in veterinary radiography.
Sci Rep. 2025 Aug 13;15(1):29690. doi: 10.1038/s41598-025-14060-6.
2
Analyzing explainability of YOLO-based breast cancer detection using heat map visualizations.
Quant Imaging Med Surg. 2025 Jul 1;15(7):6252-6271. doi: 10.21037/qims-2024-2911. Epub 2025 Jun 30.
3
CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.
Sci Rep. 2025 Aug 31;15(1):32022. doi: 10.1038/s41598-025-16669-z.
4
Prescription of Controlled Substances: Benefits and Risks
5
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
6
Novel Artificial Intelligence-Driven Infant Meningitis Screening From High-Resolution Ultrasound Imaging.
Ultrasound Med Biol. 2025 Jun 28. doi: 10.1016/j.ultrasmedbio.2025.04.009.
7
Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.
Sci Rep. 2025 Jul 1;15(1):20489. doi: 10.1038/s41598-025-07524-2.
8
Deep Learning and Image Generator Health Tabular Data (IGHT) for Predicting Overall Survival in Patients With Colorectal Cancer: Retrospective Study.
JMIR Med Inform. 2025 Aug 19;13:e75022. doi: 10.2196/75022.
9
Survey study based on the assessment and management of pain in cats by veterinary professionals after elective sterilization procedures.
J Feline Med Surg. 2025 Aug;27(8):1098612X251347156. doi: 10.1177/1098612X251347156. Epub 2025 Aug 23.
10
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
