Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.

Authors

Xie Pengfei, Shi Shuhao, Yang Shuai, Qiao Kai, Liang Ningning, Wang Linyuan, Chen Jian, Hu Guoen, Yan Bin

Affiliation

Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, Zhengzhou, China.

Publication

Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.

Abstract

Deep neural networks (DNNs) have been proven vulnerable to adversarial examples. Black-box transfer attacks, which require no access to the target model, pose a massive threat to AI applications. At present, the most effective black-box attack methods mainly adopt data enhancement techniques such as input transformation. Previous data enhancement frameworks only work with input transformations that preserve accuracy or loss; they do not work for transformations that violate these conditions, such as transformations that lose information. To solve this problem, we propose a new noise data enhancement framework (NDEF), which transforms only the adversarial perturbation and thereby avoids these issues. In addition, we introduce random erasing under this framework to prevent over-fitting of adversarial examples. Experimental results show that the black-box attack success rate of our method, the Random Erasing Iterative Fast Gradient Sign Method (REI-FGSM), is on average 4.2% higher than that of DI-FGSM across six models and 6.6% higher across three defense models. REI-FGSM can also be combined with other methods to achieve excellent performance: the attack performance of SI-FGSM improves by 22.9% on average when combined with REI-FGSM. Moreover, our combination with DI-TI-MI-FGSM, i.e., DI-TI-MI-REI-FGSM, achieves an average attack success rate of 97.0% against three ensemble adversarial training models, exceeding current gradient-based iterative attack methods. We also introduce Gaussian blur to demonstrate the compatibility of our framework.
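
To make the core idea concrete, below is a minimal PyTorch-style sketch of REI-FGSM under the NDEF principle: at each iteration, random erasing is applied to the current perturbation (not to the clean image) before the loss gradient is computed. The function and parameter names (random_erase_mask, rei_fgsm, epsilon, num_iters, scale) are illustrative assumptions, not the authors' reference implementation, and the sketch assumes a classifier that returns logits.

```python
import torch
import torch.nn.functional as F


def random_erase_mask(shape, scale=(0.02, 0.2), device="cpu"):
    """Binary mask that zeroes out one randomly placed rectangle (random erasing)."""
    _, _, h, w = shape
    mask = torch.ones(shape, device=device)
    # Rectangle area is a random fraction of the image area (assumed range).
    area = h * w * torch.empty(1).uniform_(*scale).item()
    side = max(1, min(int(area ** 0.5), h, w))
    top = torch.randint(0, h - side + 1, (1,)).item()
    left = torch.randint(0, w - side + 1, (1,)).item()
    mask[:, :, top:top + side, left:left + side] = 0
    return mask


def rei_fgsm(model, x, y, epsilon=16 / 255, num_iters=10):
    """Sketch of REI-FGSM: random erasing transforms the perturbation only,
    never the clean image, before each gradient step."""
    alpha = epsilon / num_iters
    delta = torch.zeros_like(x)
    for _ in range(num_iters):
        mask = random_erase_mask(x.shape, device=x.device)
        # Only the perturbation is transformed (NDEF); the clean image stays intact.
        x_adv = (x + delta * mask).clamp(0, 1).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Standard iterative FGSM update with an L_inf budget of epsilon.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
    return (x + delta).clamp(0, 1)
```

Because only the perturbation is transformed, the framework places no accuracy- or loss-invariance requirement on the transformation, which is what allows information-losing operations such as random erasing or Gaussian blur to be plugged in.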

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/2cc687a99086/fnbot-15-784053-g0001.jpg
