

Avoiding catastrophic overfitting in fast adversarial training with adaptive similarity step size.

Authors

Zhao Jie-Chao, Ding Jin, Sun Yong-Zhi, Tan Ping, Ma Ji-En, Fang You-Tong

Affiliations

School of Automation and Electrical Engineering & Key Institute of Robotics of Zhejiang Province, Zhejiang University of Science and Technology, Hangzhou, China.

State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou, China.

Publication

PLoS One. 2025 Jan 7;20(1):e0317023. doi: 10.1371/journal.pone.0317023. eCollection 2025.

DOI: 10.1371/journal.pone.0317023
PMID: 39774503
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11706396/
Abstract

Adversarial training has become a primary method for enhancing the robustness of deep learning models. In recent years, fast adversarial training methods have gained widespread attention due to their lower computational cost. However, because fast adversarial training uses single-step adversarial attacks instead of multi-step attacks, the generated adversarial examples lack diversity, making models prone to catastrophic overfitting and loss of robustness. Existing methods for preventing catastrophic overfitting have shortcomings, such as poor robustness due to insufficient strength of the generated adversarial examples, and low accuracy caused by excessive total perturbation. To address these issues, this paper proposes a fast adversarial training method: fast adversarial training with adaptive similarity step size (ATSS). In this method, random noise is first added to each clean input sample, and the gradient is then computed for each noisy sample. The perturbation step size for each sample is determined by the similarity between the input noise and the gradient direction. Finally, adversarial examples are generated from the step size and gradient for adversarial training. We conduct various adversarial attack tests on ResNet18 and VGG19 models using the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. The experimental results demonstrate that our method effectively avoids catastrophic overfitting. Compared with other fast adversarial training methods, ATSS achieves higher robust accuracy and clean accuracy with almost no additional training cost.
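The abstract outlines the ATSS procedure: add random noise to each clean sample, take one gradient evaluation at the noisy point, scale each sample's perturbation step by the similarity between its noise and its gradient direction, then perform a single FGSM-style step. The sketch below illustrates that general idea in NumPy; it is not the authors' implementation. The use of cosine similarity, the linear mapping `alpha = epsilon * (1 + sim) / 2`, and the names `grad_fn` and `epsilon = 8/255` are all assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-12):
    # Per-sample cosine similarity between two batches of tensors.
    a = a.reshape(len(a), -1)
    b = b.reshape(len(b), -1)
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return num / den

def atss_like_examples(x, grad_fn, epsilon=8 / 255):
    """Single-step adversarial examples with a per-sample adaptive step size.

    x       : batch of clean inputs in [0, 1], shape (N, ...).
    grad_fn : callable returning the loss gradient w.r.t. its input batch
              (hypothetical hook; in practice this comes from autograd).
    """
    # 1) Add uniform random noise inside the epsilon ball.
    eta = np.random.uniform(-epsilon, epsilon, size=x.shape)
    x_noisy = np.clip(x + eta, 0.0, 1.0)

    # 2) One gradient evaluation at the noisy point (the "fast" part).
    g = grad_fn(x_noisy)

    # 3) Per-sample step size from noise/gradient-direction similarity.
    #    sim is in [-1, 1]; map it linearly to a step in [0, epsilon]
    #    (assumed mapping -- the paper's exact formula may differ).
    sim = cosine_sim(eta, np.sign(g))
    alpha = epsilon * (1.0 + sim) / 2.0
    alpha = alpha.reshape(-1, *([1] * (x.ndim - 1)))  # broadcast per sample

    # 4) Single FGSM step, projected back into the epsilon ball around x.
    x_adv = x_noisy + alpha * np.sign(g)
    x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return np.clip(x_adv, 0.0, 1.0)
```

Samples whose initial noise already points along the gradient direction receive a larger step, while samples whose noise opposes it receive a smaller one, which is one way to vary per-sample perturbation strength without extra gradient computations.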


Figures (g001–g011, PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/cc6eeda80294/pone.0317023.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/1205a0aed0e9/pone.0317023.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/b1c2f082cdb8/pone.0317023.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/f449bcc35aa6/pone.0317023.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/0b4ad5a26f81/pone.0317023.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/105265f8cbfd/pone.0317023.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/547bb9624ae2/pone.0317023.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/f4379e1a7f30/pone.0317023.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/35d6c266bb73/pone.0317023.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/5695008e8ac8/pone.0317023.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60b1/11706396/5f7289c9a9f3/pone.0317023.g011.jpg

Similar Articles

1. Avoiding catastrophic overfitting in fast adversarial training with adaptive similarity step size.
   PLoS One. 2025 Jan 7;20(1):e0317023. doi: 10.1371/journal.pone.0317023. eCollection 2025.
2. Fast Adversarial Training With Adaptive Step Size.
   IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.
3. Towards improving fast adversarial training in multi-exit network.
   Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.
4. Adversarial Robustness Enhancement for Deep Learning-Based Soft Sensors: An Adversarial Training Strategy Using Historical Gradients and Domain Adaptation.
   Sensors (Basel). 2024 Jun 17;24(12):3909. doi: 10.3390/s24123909.
5. Boosting adversarial robustness via self-paced adversarial training.
   Neural Netw. 2023 Oct;167:706-714. doi: 10.1016/j.neunet.2023.08.063. Epub 2023 Sep 9.
6. Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.
   AJNR Am J Neuroradiol. 2025 Jan 10. doi: 10.3174/ajnr.A8650.
7. Improving Fast Adversarial Training With Prior-Guided Knowledge.
   IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6367-6383. doi: 10.1109/TPAMI.2024.3381180. Epub 2024 Aug 6.
8. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
   Neural Netw. 2023 Oct;167:730-740. doi: 10.1016/j.neunet.2023.08.048. Epub 2023 Sep 9.
9. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
   IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.
10. Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
    PLoS One. 2024 Oct 21;19(10):e0307363. doi: 10.1371/journal.pone.0307363. eCollection 2024.
