

Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.

Publication Info

IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.

DOI: 10.1109/TIP.2021.3121150
PMID: 34699360
Abstract

Adversarial images are imperceptible perturbations to mislead deep neural networks (DNNs), which have attracted great attention in recent years. Although several defense strategies achieved encouraging robustness against adversarial samples, most of them still failed to consider the robustness on common corruptions (e.g. noise, blur, and weather/digital effects). To address this problem, we propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs by progressively injecting diverse adversarial noises during training. In other words, DNNs trained with PDA achieve better general robustness against both adversarial attacks and common corruptions than other strategies. In addition, PDA also enjoys the advantages of spending less training time and keeping high standard accuracy on clean examples. Further, we theoretically prove that PDA can control the perturbation bound and guarantee better robustness. Extensive results on CIFAR-10, SVHN, ImageNet, CIFAR-10-C and ImageNet-C have demonstrated that PDA comprehensively outperforms its counterparts on the robustness against adversarial examples and common corruptions as well as clean images. More experiments on the frequency-based perturbations and visualized gradients further prove that PDA achieves general robustness and is more aligned with the human visual system.
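The abstract describes PDA's core idea: during training, perturbations are drawn from diverse noise families while the perturbation budget grows progressively, which bounds the injected noise at every step. The sketch below illustrates that idea only; it is not the authors' implementation, and the function names, the linear budget schedule, and the three noise families are assumptions chosen for illustration.

```python
import numpy as np

def pda_epsilon(epoch, total_epochs, eps_max=8 / 255):
    """Progressive budget: the perturbation bound grows linearly over training."""
    return eps_max * (epoch + 1) / total_epochs

def diversified_noise(x, eps, rng):
    """Perturb an image in [0, 1] with one randomly chosen noise family,
    keeping the perturbation inside the current budget eps."""
    kind = rng.choice(["gaussian", "uniform", "sign"])
    if kind == "gaussian":
        delta = rng.normal(0.0, eps / 2, size=x.shape)
    elif kind == "uniform":
        delta = rng.uniform(-eps, eps, size=x.shape)
    else:
        # Sign noise: an FGSM-style full-budget step in a random direction.
        delta = eps * np.sign(rng.standard_normal(x.shape))
    # Clip the perturbation to the budget, then the image to valid range.
    return np.clip(x + np.clip(delta, -eps, eps), 0.0, 1.0)
```

In a training loop, each minibatch would be perturbed with `diversified_noise(x, pda_epsilon(epoch, total_epochs), rng)` before the usual forward/backward pass, so early epochs see mild noise and later epochs see the full budget across several corruption types.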


Similar Articles

1. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.

2. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.

3. Feature Distillation in Deep Attention Network Against Adversarial Examples.
IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3691-3705. doi: 10.1109/TNNLS.2021.3113342. Epub 2023 Jul 6.

4. Adversarial parameter defense by multi-step risk minimization.
Neural Netw. 2021 Dec;144:154-163. doi: 10.1016/j.neunet.2021.08.022. Epub 2021 Aug 25.

5. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.

6. Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.

7. Perturbation diversity certificates robust generalization.
Neural Netw. 2024 Apr;172:106117. doi: 10.1016/j.neunet.2024.106117. Epub 2024 Jan 8.

8. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
IEEE Trans Image Process. 2021;30:5769-5781. doi: 10.1109/TIP.2021.3082317.

9. A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
Comput Biol Med. 2022 May;144:105345. doi: 10.1016/j.compbiomed.2022.105345. Epub 2022 Feb 24.

10. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2188-2197. doi: 10.1109/TPAMI.2020.3033291. Epub 2022 Mar 4.