
Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution

Authors

Ma Linhai, Liang Liang

Affiliation

Department of Computer Science, University of Miami, 1365 Memorial Drive, Coral Gables, 33146, FL, USA.

Publication Information

Neurocomputing (Amst). 2023 Sep 28;551. doi: 10.1016/j.neucom.2023.126524. Epub 2023 Jul 7.

DOI: 10.1016/j.neucom.2023.126524
PMID: 37587916
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10426748/
Abstract

Adversarial training is the most popular and general strategy to improve Deep Neural Network (DNN) robustness against adversarial noises. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly susceptible to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive and difficult for people not in the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples into the optimal locations in the input space by gradually expanding the exploration range with self-adaptive and gradient-aware step sizes. We evaluate AME and the other seven well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack: AutoAttack. The results show that: (1) On the three datasets, AME has the best overall performance; (2) On the Tiny ImageNet dataset, which is much more challenging, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.
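The adversarial-training loop the abstract builds on can be sketched as follows. This is an illustrative toy example only, using plain NumPy, a logistic-regression "network", and a one-step FGSM-style attack; the paper's AME method additionally evolves a per-sample noise margin with self-adaptive, gradient-aware step sizes, which is not reproduced here.

```python
import numpy as np

# Illustrative sketch only: generic adversarial training (inner attack step,
# outer weight update). AME's adaptive per-sample margin is NOT shown.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, x, y):
    """Binary cross-entropy loss with gradients w.r.t. weights and inputs."""
    p = sigmoid(x @ w)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)).mean()
    err = p - y                         # dL/dz per sample
    grad_w = x.T @ err / len(y)         # gradient w.r.t. weights
    grad_x = np.outer(err, w) / len(y)  # gradient w.r.t. each input
    return loss, grad_w, grad_x

def fgsm(w, x, y, eps):
    """One-step attack: move each input by eps in the sign of its gradient."""
    _, _, grad_x = loss_and_grads(w, x, y)
    return x + eps * np.sign(grad_x)

# Toy data: two separable Gaussian blobs.
x = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
eps, lr = 0.1, 1.0
for _ in range(200):
    x_adv = fgsm(w, x, y, eps)              # inner step: craft adversarial inputs
    _, grad_w, _ = loss_and_grads(w, x_adv, y)
    w -= lr * grad_w                        # outer step: train on them

clean_acc = ((sigmoid(x @ w) > 0.5) == y).mean()
adv_acc = ((sigmoid(fgsm(w, x, y, eps) @ w) > 0.5) == y).mean()
print(clean_acc, adv_acc)
```

The inner step perturbs each input within a fixed noise bound `eps`; the outer step updates the weights on those perturbed copies. The hyperparameter sensitivity the abstract criticizes lives precisely in that fixed `eps`: AME's contribution is letting each sample's admissible perturbation range grow adaptively instead of being hand-tuned.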


Similar Articles

1. Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution.
Neurocomputing (Amst). 2023 Sep 28;551. doi: 10.1016/j.neucom.2023.126524. Epub 2023 Jul 7.
2. Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.
3. Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum.
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12288-12300. doi: 10.1109/TNNLS.2023.3255783. Epub 2024 Sep 3.
4. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
Sensors (Basel). 2023 Jul 5;23(13):6173. doi: 10.3390/s23136173.
5. A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
Comput Biol Med. 2022 May;144:105345. doi: 10.1016/j.compbiomed.2022.105345. Epub 2022 Feb 24.
6. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.
7. Fast Adversarial Training With Adaptive Step Size.
IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.
8. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.
9. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.
10. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
IEEE Trans Image Process. 2021;30:5769-5781. doi: 10.1109/TIP.2021.3082317.

Cited By

1. A general approach to improve adversarial robustness of DNNs for medical image segmentation and detection.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12926. doi: 10.1117/12.3006534. Epub 2024 Apr 2.

References

1. A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
Comput Biol Med. 2022 May;144:105345. doi: 10.1016/j.compbiomed.2022.105345. Epub 2022 Feb 24.
2. Opportunities and challenges of deep learning methods for electrocardiogram data: A systematic review.
Comput Biol Med. 2020 Jul;122:103801. doi: 10.1016/j.compbiomed.2020.103801. Epub 2020 Jun 7.
3. Deep learning models for electrocardiograms are susceptible to adversarial attack.
Nat Med. 2020 Mar;26(3):360-363. doi: 10.1038/s41591-020-0791-x. Epub 2020 Mar 9.