Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution.

Author Information

Ma Linhai, Liang Liang

Affiliations

Department of Computer Science, University of Miami, 1365 Memorial Drive, Coral Gables, FL 33146, USA.

Publication Information

Neurocomputing (Amst). 2023 Sep 28;551. doi: 10.1016/j.neucom.2023.126524. Epub 2023 Jul 7.

Abstract

Adversarial training is the most popular and general strategy for improving the robustness of Deep Neural Networks (DNNs) against adversarial noise. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly sensitive to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive and difficult for people outside the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples at optimal locations in the input space by gradually expanding the exploration range with self-adaptive and gradient-aware step sizes. We evaluate AME and seven other well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack, AutoAttack. The results show that: (1) on all three datasets, AME has the best overall performance; (2) on the much more challenging Tiny ImageNet dataset, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.
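
The abstract only sketches the idea at a high level. The code below is a minimal, hypothetical PyTorch sketch of adversarial training with a per-sample perturbation margin that grows gradually and uses gradient-aware (PGD-style) steps; it is not the published AME algorithm. The margin-update rule, the values of grow_rate and eps_max, and the assumption that the data loader also yields sample indices are all placeholder assumptions made for illustration.

```python
# Hypothetical sketch: adversarial training with per-sample adaptive margins.
# NOT the published AME algorithm; the update rule and hyperparameters are placeholders.
import torch
import torch.nn.functional as F


def gradient_aware_perturb(model, x, y, eps, steps=10):
    """PGD-style attack bounded by a per-sample L-infinity margin eps (shape [B])."""
    bound = eps.view(-1, 1, 1, 1)
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + (bound / steps) * grad.sign()).detach()  # gradient-aware step
        delta = torch.clamp(delta, -bound, bound)                 # stay inside each sample's margin
        delta = torch.clamp(x + delta, 0.0, 1.0) - x              # keep images in [0, 1]
    return delta


def train_epoch(model, loader, optimizer, margins, grow_rate=1e-3, eps_max=8 / 255):
    """One epoch: train on perturbed samples, and expand each sample's margin
    only while the model still classifies its perturbed version correctly."""
    model.train()
    for x, y, idx in loader:          # loader is assumed to also yield sample indices
        eps = margins[idx]
        delta = gradient_aware_perturb(model, x, y, eps)
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        still_correct = logits.argmax(dim=1) == y
        margins[idx] = torch.where(still_correct,
                                   (eps + grow_rate).clamp(max=eps_max),
                                   eps)
```

A complete method would also need a rule for shrinking or freezing the margin of samples that become misclassified; this sketch only shows the gradual expansion with per-sample, gradient-aware steps that the abstract explicitly describes.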

Similar Articles

Fast Adversarial Training With Adaptive Step Size.
IEEE Trans Image Process. 2023;32:6102-6114. doi: 10.1109/TIP.2023.3326398. Epub 2023 Nov 20.
