

Rethinking Gradient Weight's Influence over Saliency Map Estimation.

Affiliation

Department of Computer Engineering, Chosun University, Gwangju 61452, Korea.

Publication

Sensors (Basel). 2022 Aug 29;22(17):6516. doi: 10.3390/s22176516.

DOI: 10.3390/s22176516
PMID: 36080974
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460162/
Abstract

Class activation map (CAM) helps to formulate saliency maps that aid in interpreting the deep neural network's prediction. Gradient-based methods are generally faster than other branches of vision interpretability and independent of human guidance. The performance of CAM-like studies depends on the governing model's layer response and the influences of the gradients. Typical gradient-oriented CAM studies rely on weighted aggregation for saliency map estimation by projecting the gradient maps into single-weight values, which may lead to an over-generalized saliency map. To address this issue, we use a global guidance map to rectify the weighted aggregation operation during saliency estimation, where resultant interpretations are comparatively cleaner and instance-specific. We obtain the global guidance map by performing elementwise multiplication between the feature maps and their corresponding gradient maps. To validate our study, we compare the proposed study with nine different saliency visualizers. In addition, we use seven commonly used evaluation metrics for quantitative comparison. The proposed scheme achieves significant improvement over the test images from the ImageNet, MS-COCO 14, and PASCAL VOC 2012 datasets.
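The two aggregation schemes the abstract contrasts can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: function names, the ReLU rectification, and the exact way the guidance map modulates the aggregated map are assumptions; only the core ideas (pooling gradients to per-channel weights vs. an elementwise feature-gradient product used as a spatial guidance map) come from the abstract.

```python
import numpy as np

def weighted_aggregation_cam(features, grads):
    """Typical gradient-weighted CAM: each gradient map is projected
    to a single scalar weight by spatial average pooling, then the
    feature maps are weight-summed and rectified.
    features, grads: arrays of shape (C, H, W)."""
    weights = grads.mean(axis=(1, 2))              # (C,) one weight per channel
    cam = np.tensordot(weights, features, axes=1)  # (H, W) weighted sum over channels
    return np.maximum(cam, 0.0)                    # keep positive evidence only

def guidance_rectified_cam(features, grads):
    """Sketch of the paper's idea: an elementwise feature-gradient
    product, summed over channels, gives a spatial global guidance
    map that rectifies the over-generalized weighted aggregation.
    The multiplicative combination here is an assumption."""
    guidance = np.maximum((features * grads).sum(axis=0), 0.0)  # (H, W)
    return weighted_aggregation_cam(features, grads) * guidance
```

Because the guidance map retains per-location gradient information instead of collapsing each gradient map to one scalar, regions where features and gradients disagree are suppressed, which is what makes the resulting saliency map cleaner and more instance-specific.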


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/482d44452982/sensors-22-06516-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/b3025bb5d77d/sensors-22-06516-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/5aa27b2a1dc6/sensors-22-06516-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/2ad2d38fedb8/sensors-22-06516-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/f4753dd769e0/sensors-22-06516-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5aa/9460162/c760c6b07677/sensors-22-06516-g006.jpg

Similar Articles

1. Rethinking Gradient Weight's Influence over Saliency Map Estimation.
Sensors (Basel). 2022 Aug 29;22(17):6516. doi: 10.3390/s22176516.
2. Human attention guided explainable artificial intelligence for computer vision models.
Neural Netw. 2024 Sep;177:106392. doi: 10.1016/j.neunet.2024.106392. Epub 2024 May 15.
3. Quantitative evaluation of Saliency-Based Explainable artificial intelligence (XAI) methods in Deep Learning-Based mammogram analysis.
Eur J Radiol. 2024 Apr;173:111356. doi: 10.1016/j.ejrad.2024.111356. Epub 2024 Feb 5.
4. TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency.
IEEE Trans Image Process. 2022;31:2529-2540. doi: 10.1109/TIP.2022.3157149. Epub 2022 Mar 21.
5. Machine-Learning-Enabled Diagnostics with Improved Visualization of Disease Lesions in Chest X-ray Images.
Diagnostics (Basel). 2024 Aug 6;14(16):1699. doi: 10.3390/diagnostics14161699.
6. Benchmarking Perturbation-Based Saliency Maps for Explaining Atari Agents.
Front Artif Intell. 2022 Jul 13;5:903875. doi: 10.3389/frai.2022.903875. eCollection 2022.
7. Saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using ultrasound image.
Comput Methods Programs Biomed. 2022 Mar;215:106612. doi: 10.1016/j.cmpb.2021.106612. Epub 2021 Dec 31.
8. AD-CAM: Enhancing Interpretability of Convolutional Neural Networks with a Lightweight Framework - From Black Box to Glass Box.
IEEE J Biomed Health Inform. 2023 Nov 1;PP. doi: 10.1109/JBHI.2023.3329231.
9. Clinical usability of deep learning-based saliency maps for occlusion myocardial infarction identification from the prehospital 12-Lead electrocardiogram.
J Electrocardiol. 2024 Nov-Dec;87:153792. doi: 10.1016/j.jelectrocard.2024.153792. Epub 2024 Sep 2.
10. What Does Deep Learning See? Insights From a Classifier Trained to Predict Contrast Enhancement Phase From CT Images.
AJR Am J Roentgenol. 2018 Dec;211(6):1184-1193. doi: 10.2214/AJR.18.20331. Epub 2018 Nov 7.
