

Fast Adversarial Training With Adaptive Step Size.

Author Information

Huang Zhichao, Fan Yanbo, Liu Chen, Zhang Weizhong, Zhang Yong, Salzmann Mathieu, Susstrunk Sabine, Wang Jue

Publication Information

IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.

DOI: 10.1109/TIP.2023.3326398
PMID: 37883291
Abstract

While adversarial training and its variants have shown to be the most effective algorithms to defend against adversarial attacks, their extremely slow training process makes it hard to scale to large datasets like ImageNet. The key idea of recent works to accelerate adversarial training is to substitute multi-step attacks (e.g., PGD) with single-step attacks (e.g., FGSM). However, these single-step methods suffer from catastrophic overfitting, where the accuracy against PGD attack suddenly drops to nearly 0% during training, and the network totally loses its robustness. In this work, we study the phenomenon from the perspective of training instances. We show that catastrophic overfitting is instance-dependent, and fitting instances with larger input gradient norm is more likely to cause catastrophic overfitting. Based on our findings, we propose a simple but effective method, Adversarial Training with Adaptive Step size (ATAS). ATAS learns an instance-wise adaptive step size that is inversely proportional to its gradient norm. Our theoretical analysis shows that ATAS converges faster than the commonly adopted non-adaptive counterparts. Empirically, ATAS consistently mitigates catastrophic overfitting and achieves higher robust accuracy on CIFAR10, CIFAR100, and ImageNet when evaluated on various adversarial budgets. Our code is released at https://github.com/HuangZhiChao95/ATAS.
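The core mechanism the abstract describes, a per-instance step size inversely proportional to the input gradient norm, used inside a single-step (FGSM-style) attack, can be illustrated with a minimal NumPy sketch. This is a hedged reconstruction of the idea only, not the authors' implementation (see their repository for that); the function names and the smoothing constant `c` are hypothetical choices for illustration.

```python
import numpy as np

def adaptive_step_sizes(grad_norms, base_step, c=0.01):
    """Per-instance step sizes inversely proportional to input gradient norm.

    Instances with large gradient norms (which the paper links to
    catastrophic overfitting) receive smaller steps. `c` is a hypothetical
    smoothing constant to keep the step size bounded.
    """
    return base_step / (c + grad_norms)

def single_step_perturbation(grad, step_sizes, eps):
    """FGSM-style sign perturbation with per-instance step sizes,
    clipped to the L-infinity budget eps."""
    delta = step_sizes[:, None] * np.sign(grad)
    return np.clip(delta, -eps, eps)

# Toy usage: two instances, one with a 10x larger input gradient norm.
grads = np.array([[0.2, -0.1, 0.05],
                  [2.0, -1.5, 1.0]])
norms = np.linalg.norm(grads, axis=1)
steps = adaptive_step_sizes(norms, base_step=0.1)
delta = single_step_perturbation(grads, steps, eps=0.03)
```

Here the high-gradient-norm instance gets a smaller step, and all perturbations stay within the epsilon ball, which is the adaptive behavior the abstract attributes to ATAS.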


Similar Articles

1. Fast Adversarial Training With Adaptive Step Size.
IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.

2. Towards improving fast adversarial training in multi-exit network.
Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.

3. Improving Fast Adversarial Training With Prior-Guided Knowledge.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6367-6383. doi: 10.1109/TPAMI.2024.3381180. Epub 2024 Aug 6.

4. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
Sensors (Basel). 2023 Jul 5;23(13):6173. doi: 10.3390/s23136173.

5. Improving Adversarial Robustness Against Universal Patch Attacks Through Feature Norm Suppressing.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1410-1424. doi: 10.1109/TNNLS.2023.3326871. Epub 2025 Jan 7.

6. Boosting Fast Adversarial Training With Learnable Adversarial Initialization.
IEEE Trans Image Process. 2022;31:4417-4430. doi: 10.1109/TIP.2022.3184255. Epub 2022 Jul 1.

7. Evaluation of GAN-Based Model for Adversarial Training.
Sensors (Basel). 2023 Mar 1;23(5):2697. doi: 10.3390/s23052697.

8. Boosting adversarial robustness via self-paced adversarial training.
Neural Netw. 2023 Oct;167:706-714. doi: 10.1016/j.neunet.2023.08.063. Epub 2023 Sep 9.

9. Gradient Correction for White-Box Adversarial Attacks.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18419-18430. doi: 10.1109/TNNLS.2023.3315414. Epub 2024 Dec 2.

10. Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution.
Neurocomputing (Amst). 2023 Sep 28;551. doi: 10.1016/j.neucom.2023.126524. Epub 2023 Jul 7.

Cited By

1. Avoiding catastrophic overfitting in fast adversarial training with adaptive similarity step size.
PLoS One. 2025 Jan 7;20(1):e0317023. doi: 10.1371/journal.pone.0317023. eCollection 2025.