

Improving Fast Adversarial Training With Prior-Guided Knowledge.

Authors

Jia Xiaojun, Zhang Yong, Wei Xingxing, Wu Baoyuan, Ma Ke, Wang Jue, Cao Xiaochun

Publication

IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6367-6383. doi: 10.1109/TPAMI.2024.3381180. Epub 2024 Aug 6.

DOI: 10.1109/TPAMI.2024.3381180
PMID: 38530739
Abstract

Fast adversarial training (FAT) is an efficient method to improve robustness in white-box attack scenarios. However, the original FAT suffers from catastrophic overfitting, which dramatically and suddenly reduces robustness after a few training epochs. Although various FAT variants have been proposed to prevent overfitting, they require high training time. In this paper, we investigate the relationship between adversarial example quality and catastrophic overfitting by comparing the training processes of standard adversarial training and FAT. We find that catastrophic overfitting occurs when the attack success rate of adversarial examples becomes worse. Based on this observation, we propose a positive prior-guided adversarial initialization to prevent overfitting by improving adversarial example quality without extra training time. This initialization is generated by using high-quality adversarial perturbations from the historical training process. We provide theoretical analysis for the proposed initialization and propose a prior-guided regularization method that boosts the smoothness of the loss function. Additionally, we design a prior-guided ensemble FAT method that averages the different model weights of historical models using different decay rates. Our proposed method, called FGSM-PGK, assembles the prior-guided knowledge, i.e., the prior-guided initialization and model weights, acquired during the historical training process. The proposed method can effectively improve the model's adversarial robustness in white-box attack scenarios. Evaluations of four datasets demonstrate the superiority of the proposed method.
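The prior-guided initialization described in the abstract can be sketched in a few lines: instead of starting the FGSM attack from a random perturbation, each example's perturbation from the previous epoch is reused as the starting point. The sketch below is a minimal illustration under assumed details, using a toy logistic-regression model; the buffer name `prior_delta`, the loss, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Toy FGSM with a prior-guided initialization: the perturbation computed
# for each example in the previous epoch seeds the attack in the next one,
# instead of a random start. All names and values here are illustrative.

rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.normal(size=(n, d))                  # toy inputs
y = rng.integers(0, 2, size=n).astype(float) # binary labels
w = rng.normal(size=d) * 0.1                 # toy linear classifier weights
eps = 0.1                                    # L-infinity perturbation budget

def loss_grad_x(x, label, w):
    """Gradient of the logistic loss with respect to the input x."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - label) * w

# Prior buffer: stores the last epoch's perturbation for every example.
prior_delta = np.zeros((n, d))

for epoch in range(3):
    for i in range(n):
        # Prior-guided initialization: start from the stored perturbation.
        delta = prior_delta[i].copy()
        g = loss_grad_x(X[i] + delta, y[i], w)
        # One FGSM step, then project back into the eps-ball.
        delta = np.clip(delta + eps * np.sign(g), -eps, eps)
        prior_delta[i] = delta  # save as the prior for the next epoch
```

Because the stored perturbations are by-products of earlier epochs, this kind of initialization adds essentially no training cost, which matches the abstract's claim of improving adversarial example quality "without extra training time".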

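The prior-guided ensemble component, which averages historical model weights under different decay rates, resembles maintaining several exponential moving averages (EMAs) of the weights and combining them. The following sketch illustrates that reading; the decay values and the final averaging step are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: track several EMAs of the model weights, one per
# decay rate, then average them into an ensemble weight vector. The
# additive update below is a stand-in for real SGD training steps.

decays = [0.9, 0.99, 0.999]
w = np.zeros(4)                    # current model weights (toy size)
emas = [w.copy() for _ in decays]  # one historical average per decay rate

for step in range(100):
    w = w + 0.01                   # stand-in for one training update
    for k, d in enumerate(decays):
        emas[k] = d * emas[k] + (1.0 - d) * w

# Ensemble: average the differently-decayed historical weight averages.
ensemble_w = np.mean(emas, axis=0)
```

A small decay tracks recent weights closely, while a large decay retains a longer history; averaging the two regimes is one plausible way to "assemble" the historical knowledge the abstract describes.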

Similar Articles

1
Improving Fast Adversarial Training With Prior-Guided Knowledge.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6367-6383. doi: 10.1109/TPAMI.2024.3381180. Epub 2024 Aug 6.
2
Towards improving fast adversarial training in multi-exit network.
Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.
3
Boosting Fast Adversarial Training With Learnable Adversarial Initialization.
IEEE Trans Image Process. 2022;31:4417-4430. doi: 10.1109/TIP.2022.3184255. Epub 2022 Jul 1.
4
Fast Adversarial Training With Adaptive Step Size.
IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.
5
Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.
6
Boosting adversarial robustness via self-paced adversarial training.
Neural Netw. 2023 Oct;167:706-714. doi: 10.1016/j.neunet.2023.08.063. Epub 2023 Sep 9.
7
Untargeted white-box adversarial attack to break into deep learning based COVID-19 monitoring face mask detection system.
Multimed Tools Appl. 2023 May 5:1-27. doi: 10.1007/s11042-023-15405-x.
8
Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.
9
A medical image classification method based on self-regularized adversarial learning.
Med Phys. 2024 Nov;51(11):8232-8246. doi: 10.1002/mp.17320. Epub 2024 Jul 30.
10
Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
IEEE Trans Image Process. 2023;32:3862-3872. doi: 10.1109/TIP.2023.3290532. Epub 2023 Jul 14.