


Boosting adversarial robustness via self-paced adversarial training.

Affiliations

School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China; School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China.

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China.

Publication Information

Neural Netw. 2023 Oct;167:706-714. doi: 10.1016/j.neunet.2023.08.063. Epub 2023 Sep 9.

DOI: 10.1016/j.neunet.2023.08.063
PMID: 37729786
Abstract

Adversarial training is considered one of the most effective methods for improving the adversarial robustness of deep neural networks. Despite this success, it still suffers from unsatisfactory performance and overfitting. Considering the intrinsic mechanism of adversarial training, recent studies adopt the idea of curriculum learning to alleviate overfitting. However, this also introduces new issues, namely the lack of a quantitative criterion for attack strength, and catastrophic forgetting. To mitigate these issues, we propose self-paced adversarial training (SPAT), which explicitly builds the learning process of adversarial training on adversarial examples from the whole dataset. Specifically, our model is first trained with "easy" adversarial examples and then continuously strengthened by gradually adding "complex" adversarial examples. This approach improves the ability to fit "complex" adversarial examples while retaining performance on "easy" ones. To balance adversarial examples between classes, we determine the difficulty of adversarial examples locally within each class. Notably, this learning paradigm can also be incorporated into other advanced methods to further boost adversarial robustness. Experimental results show the effectiveness of our proposed model against various attacks on widely used benchmarks. In particular, on CIFAR100, SPAT provides a boost of 1.7% (relatively 5.4%) in robust accuracy under the PGD10 attack and 3.9% (relatively 7.2%) in natural accuracy for AWP.

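The self-paced curriculum described in the abstract (rank adversarial examples by difficulty within each class, then gradually admit harder ones so that easy examples are never dropped) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-example loss as the difficulty proxy, the `select_self_paced` helper, and the linear pacing schedule are all assumptions made for the sake of the example.

```python
import numpy as np

def select_self_paced(losses, labels, pace):
    """Pick the easiest `pace` fraction of adversarial examples
    within each class (lower loss is treated as easier).

    losses : per-example adversarial loss, used as a difficulty proxy
    labels : class label of each example
    pace   : fraction of each class to admit, in (0, 1]
    """
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        order = idx[np.argsort(losses[idx])]   # easiest first
        k = max(1, int(pace * len(idx)))       # per-class quota keeps classes balanced
        selected.extend(int(i) for i in order[:k])
    return sorted(selected)

# Toy difficulty scores for six examples across two classes.
losses = np.array([0.2, 1.5, 0.9, 0.1, 2.0, 0.4])
labels = np.array([0, 0, 0, 1, 1, 1])

# Grow the admitted fraction across "epochs": the easy subset comes
# first, harder examples are added later, and earlier examples stay in.
for pace in (0.33, 0.67, 1.0):
    print(pace, select_self_paced(losses, labels, pace))
```

Because the quota is computed per class rather than over the pooled dataset, a class whose adversarial examples are uniformly hard still contributes members at every pace, which matches the abstract's motivation for determining difficulty locally within each class.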

Similar Articles

1. Boosting adversarial robustness via self-paced adversarial training.
Neural Netw. 2023 Oct;167:706-714. doi: 10.1016/j.neunet.2023.08.063. Epub 2023 Sep 9.
2. Avoiding catastrophic overfitting in fast adversarial training with adaptive similarity step size.
PLoS One. 2025 Jan 7;20(1):e0317023. doi: 10.1371/journal.pone.0317023. eCollection 2025.
3. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.
4. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
Sensors (Basel). 2023 Jul 5;23(13):6173. doi: 10.3390/s23136173.
5. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
IEEE Trans Image Process. 2023;32:3862-3872. doi: 10.1109/TIP.2023.3290532. Epub 2023 Jul 14.
6. Boosting the transferability of adversarial examples via stochastic serial attack.
Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
7. Towards improving fast adversarial training in multi-exit network.
Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.
8. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
9. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
Front Big Data. 2022 May 3;5:822783. doi: 10.3389/fdata.2022.822783. eCollection 2022.
10. Mitigating Accuracy-Robustness Trade-Off via Balanced Multi-Teacher Adversarial Distillation.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9338-9352. doi: 10.1109/TPAMI.2024.3416308. Epub 2024 Nov 6.

Citing Articles

1. Prediction of Cerebrospinal Fluid (CSF) Pressure with Generative Adversarial Network Synthetic Plasma-CSF Biomarker Pairing.
Neuroinformatics. 2025 Jul 10;23(3):38. doi: 10.1007/s12021-025-09729-2.
2. Universal attention guided adversarial defense using feature pyramid and non-local mechanisms.
Sci Rep. 2025 Feb 12;15(1):5237. doi: 10.1038/s41598-025-89267-8.
3. Enhanced detection of accounting fraud using a CNN-LSTM-Attention model optimized by Sparrow search.
PeerJ Comput Sci. 2024 Nov 26;10:e2532. doi: 10.7717/peerj-cs.2532. eCollection 2024.