
Analyzing the Noise Robustness of Deep Neural Networks.

Authors

Cao Kelei, Liu Mengchen, Su Hang, Wu Jing, Zhu Jun, Liu Shixia

Publication

IEEE Trans Vis Comput Graph. 2021 Jul;27(7):3289-3304. doi: 10.1109/TVCG.2020.2969185. Epub 2021 May 27.

DOI: 10.1109/TVCG.2020.2969185
PMID: 31985427
Abstract

Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) to make incorrect predictions. Although much work has been done on both adversarial attack and defense, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified. The key is to compare and analyze the datapaths of both the adversarial and normal examples. A datapath is a group of critical neurons along with their connections. We formulate the datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features, has been designed to help investigate how datapaths of adversarial and normal examples diverge and merge in the prediction process. A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.
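The abstract frames datapath extraction as a subset-selection problem: find the critical neurons that drive a given prediction. The paper solves this by constructing and training a neural network; the greedy scoring below is only a toy illustration of that subset-selection view, with the function name `extract_datapath`, the linear readout `W`, and all inputs being hypothetical stand-ins rather than the authors' method.

```python
import numpy as np

def extract_datapath(activations, readout_weights, k):
    """Greedy proxy for datapath extraction: pick the k neurons in one
    layer whose activations contribute most to the predicted class score.

    The paper instead learns the selection mask by training a neural
    network; this greedy variant only sketches the subset-selection idea.
    """
    # Linear readout: class logits from this layer's activations.
    logits = readout_weights @ activations
    pred = int(np.argmax(logits))
    # Per-neuron contribution to the predicted class logit.
    contrib = readout_weights[pred] * activations
    # Keep the k neurons with the largest contribution.
    return np.argsort(contrib)[::-1][:k]

# Toy layer with 6 neurons and a 3-class linear readout.
rng = np.random.default_rng(0)
acts = rng.random(6)
W = rng.normal(size=(3, 6))
critical = extract_datapath(acts, W, k=2)
print(sorted(critical.tolist()))
```

Comparing the neuron subsets returned for a normal example and its adversarial counterpart is, in miniature, the divergence analysis the multi-level visualization is built around.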

Similar Articles

1
Analyzing the Noise Robustness of Deep Neural Networks.
IEEE Trans Vis Comput Graph. 2021 Jul;27(7):3289-3304. doi: 10.1109/TVCG.2020.2969185. Epub 2021 May 27.
2
Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.
3
Attention distraction with gradient sharpening for multi-task adversarial attack.
Math Biosci Eng. 2023 Jun 14;20(8):13562-13580. doi: 10.3934/mbe.2023605.
4
Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
5
Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.
6
Adversarial Examples: Attacks and Defenses for Deep Learning.
IEEE Trans Neural Netw Learn Syst. 2019 Sep;30(9):2805-2824. doi: 10.1109/TNNLS.2018.2886017. Epub 2019 Jan 14.
7
Adversarial parameter defense by multi-step risk minimization.
Neural Netw. 2021 Dec;144:154-163. doi: 10.1016/j.neunet.2021.08.022. Epub 2021 Aug 25.
8
DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
Front Neurorobot. 2023 Feb 9;17:1129720. doi: 10.3389/fnbot.2023.1129720. eCollection 2023.
9
Towards improving fast adversarial training in multi-exit network.
Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.
10
Feature Distillation in Deep Attention Network Against Adversarial Examples.
IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3691-3705. doi: 10.1109/TNNLS.2021.3113342. Epub 2023 Jul 6.

Cited By

1
Uncertainty estimation in female pelvic synthetic computed tomography generated from iterative reconstructed cone-beam computed tomography.
Phys Imaging Radiat Oncol. 2025 Mar 5;33:100743. doi: 10.1016/j.phro.2025.100743. eCollection 2025 Jan.
2
Visual Analytics for Efficient Image Exploration and User-Guided Image Captioning.
IEEE Trans Vis Comput Graph. 2024 Jun;30(6):2875-2887. doi: 10.1109/TVCG.2024.3388514. Epub 2024 Jun 19.
3
RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs.
Front Radiol. 2024 Jan 8;3:1274273. doi: 10.3389/fradi.2023.1274273. eCollection 2023.
4
[Performance of low-dose CT image reconstruction for detecting intracerebral hemorrhage: selection of dose, algorithms and their combinations].
Nan Fang Yi Ke Da Xue Xue Bao. 2022 Feb 20;42(2):223-231. doi: 10.12122/j.issn.1673-4254.2022.02.08.