

Group visualization of class-discriminative features.

Affiliations

Department of General Systems Studies, The University of Tokyo, Tokyo, Japan.


Publication information

Neural Netw. 2020 Sep;129:75-90. doi: 10.1016/j.neunet.2020.05.026. Epub 2020 May 29.

DOI: 10.1016/j.neunet.2020.05.026
PMID: 32502799
Abstract

Research explaining the behavior of convolutional neural networks (CNNs) has gained a lot of attention over the past few years. Although many visualization methods have been proposed to explain network predictions, most fail to provide clear correlations between the target output and the features extracted by convolutional layers. In this work, we define a concept, i.e., class-discriminative feature groups, to specify features that are extracted by groups of convolutional kernels correlated with a particular image class. We propose a detection method to detect class-discriminative feature groups and a visualization method to highlight image regions correlated with particular output and to interpret class-discriminative feature groups intuitively. The experiments showed that the proposed method can disentangle features based on image classes and shed light on what feature groups are extracted from which regions of the image. We also applied this method to visualize "lost" features in adversarial samples and features in an image containing a non-class object to demonstrate its ability to debug why the network failed or succeeded.
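The core idea of grouping kernels by the class they respond to can be illustrated with a toy sketch. This is not the paper's actual detection method; it is a minimal stand-in that assumes we already have pooled per-kernel activations for a set of labeled images, and it simply assigns each kernel to the class whose images activate it most strongly. The function name and data layout are illustrative assumptions.

```python
from collections import defaultdict

def class_discriminative_groups(activations, labels, num_classes):
    """Toy grouping of conv kernels by the class that activates them most.

    activations: list of per-image vectors; activations[i][k] is the mean
        activation of kernel k on image i (e.g. a global-average-pooled
        feature map). labels: class index per image.
    Returns {class_index: [kernel indices]} -- a simplified stand-in for
    the paper's class-discriminative feature groups.
    """
    num_kernels = len(activations[0])
    # Mean activation of each kernel over the images of each class.
    sums = [[0.0] * num_kernels for _ in range(num_classes)]
    counts = [0] * num_classes
    for act, y in zip(activations, labels):
        counts[y] += 1
        for k in range(num_kernels):
            sums[y][k] += act[k]
    means = [[s / counts[c] for s in sums[c]] for c in range(num_classes)]
    # Assign each kernel to the class with the highest mean activation.
    groups = defaultdict(list)
    for k in range(num_kernels):
        best = max(range(num_classes), key=lambda c: means[c][k])
        groups[best].append(k)
    return dict(groups)

# Toy data: kernel 0 fires on class-0 images, kernel 1 on class-1 images.
acts = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.7], [0.2, 0.9]]
labels = [0, 0, 1, 1]
print(class_discriminative_groups(acts, labels, 2))  # {0: [0], 1: [1]}
```

In practice the pooled activations would come from hooking an intermediate convolutional layer of the trained CNN; the paper's method additionally provides a visualization step that maps each detected group back to image regions, which this sketch omits.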


Similar articles

1. Group visualization of class-discriminative features.
   Neural Netw. 2020 Sep;129:75-90. doi: 10.1016/j.neunet.2020.05.026. Epub 2020 May 29.
2. A novel feature representation: Aggregating convolution kernels for image retrieval.
   Neural Netw. 2020 Oct;130:1-10. doi: 10.1016/j.neunet.2020.06.010. Epub 2020 Jun 24.
3. Visualization Methods for Image Transformation Convolutional Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2019 Jul;30(7):2231-2243. doi: 10.1109/TNNLS.2018.2881194. Epub 2018 Dec 11.
4. Dense Residual Network: Enhancing global dense feature flow for character recognition.
   Neural Netw. 2021 Jul;139:77-85. doi: 10.1016/j.neunet.2021.02.005. Epub 2021 Feb 25.
5. Self-organized operational neural networks for severe image restoration problems.
   Neural Netw. 2021 Mar;135:201-211. doi: 10.1016/j.neunet.2020.12.014. Epub 2020 Dec 23.
6. Perceptual Adversarial Networks for Image-to-Image Transformation.
   IEEE Trans Image Process. 2018 Aug;27(8):4066-4079. doi: 10.1109/TIP.2018.2836316. Epub 2018 May 14.
7. Endoscopic Image Classification and Retrieval using Clustered Convolutional Features.
   J Med Syst. 2017 Oct 30;41(12):196. doi: 10.1007/s10916-017-0836-y.
8. Attention-guided CNN for image denoising.
   Neural Netw. 2020 Apr;124:117-129. doi: 10.1016/j.neunet.2019.12.024. Epub 2020 Jan 7.
9. Fine-Tuning CNN Image Retrieval with No Human Annotation.
   IEEE Trans Pattern Anal Mach Intell. 2019 Jul;41(7):1655-1668. doi: 10.1109/TPAMI.2018.2846566. Epub 2018 Jun 12.
10. Image Object Recognition via Deep Feature-Based Adaptive Joint Sparse Representation.
    Comput Intell Neurosci. 2019 Nov 21;2019:8258275. doi: 10.1155/2019/8258275. eCollection 2019.