


Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.

Publication

IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.

DOI: 10.1109/TIP.2020.3042083
PMID: 33290221
Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, inputs with imperceptible perturbations that mislead DNNs into incorrect results. Despite the potential risk they bring, adversarial examples are also valuable for providing insights into the weaknesses and blind spots of DNNs. Thus, the interpretability of a DNN in the adversarial setting aims to explain the rationale behind its decision-making process and promotes a deeper understanding that leads to better practical applications. To address this issue, we try to explain the adversarial robustness of deep models from a new perspective of neuron sensitivity, which is measured by the intensity of a neuron's behavior variation between benign and adversarial examples. In this paper, we first draw a close connection between adversarial robustness and neuron sensitivity, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting. Based on that, we further propose to improve adversarial robustness by stabilizing the behaviors of sensitive neurons. Moreover, we demonstrate that state-of-the-art adversarial training methods improve model robustness by reducing neuron sensitivity, which in turn confirms the strong connection between adversarial robustness and neuron sensitivity. Extensive experiments on various datasets demonstrate that our algorithm achieves excellent results. To the best of our knowledge, we are the first to study adversarial robustness through neuron sensitivity.

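The abstract defines neuron sensitivity as the intensity of a neuron's behavior variation between benign and adversarial examples. A minimal sketch of one plausible instantiation — per-neuron mean absolute activation change, averaged over samples — follows; the function name, the averaging choice, and the toy data are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def neuron_sensitivity(act_benign, act_adv):
    """Per-neuron sensitivity: mean absolute activation change
    between benign and adversarial inputs, averaged over samples.

    act_benign, act_adv: arrays of shape (n_samples, n_neurons)
    holding a layer's activations on paired benign/adversarial inputs.
    """
    return np.mean(np.abs(act_benign - act_adv), axis=0)

# Toy example: 4 samples, 3 neurons; adversarial activations are
# the benign ones plus a small perturbation-induced shift.
rng = np.random.default_rng(0)
benign = rng.normal(size=(4, 3))
adv = benign + rng.normal(scale=0.1, size=(4, 3))

s = neuron_sensitivity(benign, adv)
# Neurons with the largest s are the "sensitive" ones the paper
# proposes to stabilize during training.
sensitive_order = np.argsort(s)[::-1]
```

In this reading, the proposed defense would add a training penalty on the activation deviation of the most sensitive neurons, shrinking `s` for those units.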

Similar Articles

1. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.
2. Analyzing the Noise Robustness of Deep Neural Networks.
IEEE Trans Vis Comput Graph. 2021 Jul;27(7):3289-3304. doi: 10.1109/TVCG.2020.2969185. Epub 2021 May 27.
3. Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.
4. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.
5. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
IEEE Trans Image Process. 2023;32:3862-3872. doi: 10.1109/TIP.2023.3290532. Epub 2023 Jul 14.
6. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
Neural Netw. 2020 Aug;128:61-72. doi: 10.1016/j.neunet.2020.04.030. Epub 2020 Apr 30.
7. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images.
BMC Bioinformatics. 2024 Aug 20;25(1):269. doi: 10.1186/s12859-024-05894-4.
8. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.
9. Universal adversarial attacks on deep neural networks for medical image classification.
BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
10. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.