
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

Authors

Lee Sungyoon, Kim Hoki, Lee Jaewook

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2645-2651. doi: 10.1109/TPAMI.2022.3169217. Epub 2023 Jan 6.

Abstract

Deep learning is vulnerable to adversarial examples. Many defenses based on randomized neural networks have been proposed to solve the problem, but fail to achieve robustness against attacks using proxy gradients, such as the Expectation over Transformation (EOT) attack. We investigate the effect of adversarial attacks using proxy gradients on randomized neural networks and demonstrate that their effectiveness depends heavily on the directional distribution of the loss gradients of the randomized neural network. In particular, we show that proxy gradients are less effective when the gradients are more scattered. Motivated by this observation, we propose Gradient Diversity (GradDiv) regularizations that minimize the concentration of the gradients to build a robust randomized neural network. Our experiments on MNIST, CIFAR10, and STL10 show that the proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods. Moreover, our method efficiently reduces the transferability among sample models of randomized neural networks.
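
The following is a minimal, illustrative sketch of one way such a gradient-diversity penalty could be implemented, assuming a PyTorch model whose forward pass is stochastic (e.g., dropout or noise layers kept active during training). The specific form used here, which penalizes the squared mean resultant length of the normalized per-sample input gradients, is an assumption for illustration only; the paper's GradDiv regularizers are defined via directional statistics of the gradients and may differ in detail. The function name, the hyperparameters n_samples and lam, and the training-loop usage are hypothetical.

import torch
import torch.nn.functional as F


def grad_diversity_penalty(model, x, y, n_samples=5):
    # Penalty that grows as the input-gradient directions of several stochastic
    # forward passes become more aligned (i.e., more concentrated).
    unit_grads = []
    for _ in range(n_samples):
        x_req = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)  # each call re-samples the model's randomness
        (g,) = torch.autograd.grad(loss, x_req, create_graph=True)
        g = g.flatten(start_dim=1)                # (batch, dim)
        unit_grads.append(F.normalize(g, dim=1))  # unit-length gradient directions
    stacked = torch.stack(unit_grads, dim=0)      # (n_samples, batch, dim)
    mean_dir = stacked.mean(dim=0)                # mean unit gradient per example
    # Squared mean resultant length: close to 1 when directions are concentrated,
    # close to 0 when they are scattered; minimizing it encourages diversity.
    return (mean_dir.norm(dim=1) ** 2).mean()


# Hypothetical training-loop usage, with `lam` weighting the penalty:
#   loss = F.cross_entropy(model(x), y) + lam * grad_diversity_penalty(model, x, y)
#   loss.backward()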

