

DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation.

Authors

Wang Junpeng, Gou Liang, Zhang Wei, Yang Hao, Shen Han-Wei

Publication

IEEE Trans Vis Comput Graph. 2019 Jun;25(6):2168-2180. doi: 10.1109/TVCG.2019.2903943. Epub 2019 Mar 15.

DOI: 10.1109/TVCG.2019.2903943
PMID: 30892211
Abstract

Deep Neural Networks (DNNs) have been extensively used in multiple disciplines due to their superior performance. However, in most cases, DNNs are considered as black-boxes and the interpretation of their internal working mechanism is usually challenging. Given that model trust is often built on the understanding of how a model works, the interpretation of DNNs becomes more important, especially in safety-critical applications (e.g., medical diagnosis, autonomous driving). In this paper, we propose DeepVID, a Deep learning approach to Visually Interpret and Diagnose DNN models, especially image classifiers. In detail, we train a small locally-faithful model to mimic the behavior of an original cumbersome DNN around a particular data instance of interest, and the local model is sufficiently simple such that it can be visually interpreted (e.g., a linear model). Knowledge distillation is used to transfer the knowledge from the cumbersome DNN to the small model, and a deep generative model (i.e., variational auto-encoder) is used to generate neighbors around the instance of interest. Those neighbors, which come with small feature variances and semantic meanings, can effectively probe the DNN's behaviors around the interested instance and help the small model to learn those behaviors. Through comprehensive evaluations, as well as case studies conducted together with deep learning experts, we validate the effectiveness of DeepVID.
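The core idea described in the abstract — probe a black-box classifier with generated neighbors of an instance, then distill its local behavior into a simple, visually interpretable model — can be sketched as follows. This is a minimal illustration of the general local-surrogate-with-distillation pattern, not the authors' implementation: `teacher_logits` is a toy stand-in for the cumbersome DNN, and the VAE decoder is replaced by direct latent-space perturbation (an identity decoder) so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_logits(x):
    # Toy black-box "teacher" with a nonlinear decision surface over 2 features.
    return np.stack([x[:, 0] - x[:, 1] ** 2, x[:, 1] ** 2 - x[:, 0]], axis=1)

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, as in standard distillation.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# 1. Instance of interest and its neighbors: small perturbations standing in
#    for samples decoded from a VAE's latent space around the instance.
x0 = np.array([0.5, 0.2])
neighbors = x0 + 0.1 * rng.standard_normal((256, 2))

# 2. Knowledge distillation targets: the teacher's softened predictions
#    on the generated neighbors.
soft_targets = softmax(teacher_logits(neighbors), T=2.0)

# 3. Fit a locally faithful, interpretable student: one linear model per
#    class via least squares on the soft targets.
X = np.hstack([neighbors, np.ones((len(neighbors), 1))])  # bias column
W, *_ = np.linalg.lstsq(X, soft_targets, rcond=None)

# W's rows indicate which input features drive the teacher's decision near x0;
# fidelity measures how well the student mimics the teacher locally.
local_pred = X @ W
fidelity = 1.0 - np.mean((local_pred - soft_targets) ** 2)
```

Visualizing the student's per-class weights (for images, reshaped to the input dimensions) is what makes the local explanation inspectable, which is the role the linear surrogate plays in the paper's pipeline.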


Similar Articles

1. DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation.
   IEEE Trans Vis Comput Graph. 2019 Jun;25(6):2168-2180. doi: 10.1109/TVCG.2019.2903943. Epub 2019 Mar 15.
2. Extracting and inserting knowledge into stacked denoising auto-encoders.
   Neural Netw. 2021 May;137:31-42. doi: 10.1016/j.neunet.2021.01.010. Epub 2021 Jan 20.
3. Autoencoder and restricted Boltzmann machine for transfer learning in functional magnetic resonance imaging task classification.
   Heliyon. 2023 Jul 16;9(7):e18086. doi: 10.1016/j.heliyon.2023.e18086. eCollection 2023 Jul.
4. RobustMap: Visual Exploration of DNN Adversarial Robustness in Generative Latent Space.
   IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5801-5815. doi: 10.1109/TVCG.2024.3471551.
5. Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network.
   Front Neuroinform. 2022 Mar 16;15:802938. doi: 10.3389/fninf.2021.802938. eCollection 2021.
6. Visual Genealogy of Deep Neural Networks.
   IEEE Trans Vis Comput Graph. 2020 Nov;26(11):3340-3352. doi: 10.1109/TVCG.2019.2921323. Epub 2019 Jun 6.
7. Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification.
   IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):5099-5113. doi: 10.1109/TPAMI.2022.3200344. Epub 2023 Mar 7.
8. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
   Front Comput Neurosci. 2020 Nov 30;14:580632. doi: 10.3389/fncom.2020.580632. eCollection 2020.
9. Information Entropy Measures for Evaluation of Reliability of Deep Neural Network Results.
   Entropy (Basel). 2023 Mar 27;25(4):573. doi: 10.3390/e25040573.
10. TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation.
   Entropy (Basel). 2020 Oct 24;22(11):1203. doi: 10.3390/e22111203.

Cited By

1. MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models.
   FAccT '24 (2024). 2024 Jun;2024:1861-1874. doi: 10.1145/3630106.3659011. Epub 2024 Jun 5.
2. UsbVisdaNet: User Behavior Visual Distillation and Attention Network for Multimodal Sentiment Classification.
   Sensors (Basel). 2023 May 17;23(10):4829. doi: 10.3390/s23104829.
3. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
   PLoS One. 2023 May 15;18(5):e0285668. doi: 10.1371/journal.pone.0285668. eCollection 2023.
4. A lightweight deep neural network with higher accuracy.
   PLoS One. 2022 Aug 2;17(8):e0271225. doi: 10.1371/journal.pone.0271225. eCollection 2022.