Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.

Authors

Xie Pengfei, Shi Shuhao, Yang Shuai, Qiao Kai, Liang Ningning, Wang Linyuan, Chen Jian, Hu Guoen, Yan Bin

Affiliations

Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, Zhengzhou, China.

Publication

Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.

DOI: 10.3389/fnbot.2021.784053
PMID: 34955802
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8696674/
Abstract

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Black-box transfer attacks pose a massive threat to AI applications because they require no access to the target models. At present, the most effective black-box attack methods mainly adopt data enhancement, such as input transformation. Previous data enhancement frameworks only work with input transformations that preserve accuracy or loss invariance; they do not work for transformations that fail to meet these conditions, such as transformations that lose information. To solve this problem, we propose a new noise data enhancement framework (NDEF), which transforms only the adversarial perturbation and thereby avoids the above issues. In addition, we introduce random erasing under this framework to prevent over-fitting of adversarial examples. Experimental results show that the black-box attack success rate of our method, the Random Erasing Iterative Fast Gradient Sign Method (REI-FGSM), is 4.2% higher than that of DI-FGSM on average across six models and 6.6% higher across three defense models. REI-FGSM can also be combined with other methods to achieve excellent performance: the attack performance of SI-FGSM improves by 22.9% on average when combined with REI-FGSM. Moreover, our combination with DI-TI-MI-FGSM, i.e., DI-TI-MI-REI-FGSM, achieves an average attack success rate of 97.0% against three ensemble adversarial training models, which exceeds current gradient-based iterative attack methods. We also introduce Gaussian blur to demonstrate the compatibility of our framework.
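
The abstract gives enough detail to sketch the core idea. The following is a minimal, illustrative sketch (not the authors' released code) of what REI-FGSM could look like based solely on the description above, assuming a PyTorch image classifier with inputs in [0, 1]: an iterative FGSM loop in which random erasing is applied only to the adversarial perturbation (the NDEF idea), never to the clean image. The epsilon, step count, and erasing-area range are assumed placeholder values, not parameters reported in the paper.

import torch
import torch.nn.functional as F

def random_erase(delta: torch.Tensor, area_frac=(0.02, 0.2)) -> torch.Tensor:
    """Zero out one random rectangle of the perturbation (random erasing applied to delta only)."""
    _, _, h, w = delta.shape  # assumes NCHW layout
    frac = torch.empty(1).uniform_(*area_frac).item()
    eh = max(1, min(h, int(round((frac * h * w) ** 0.5))))
    ew = max(1, min(w, int(round(frac * h * w / eh))))
    top = torch.randint(0, h - eh + 1, (1,)).item()
    left = torch.randint(0, w - ew + 1, (1,)).item()
    mask = torch.ones_like(delta)
    mask[:, :, top:top + eh, left:left + ew] = 0.0
    return delta * mask  # the clean image is never transformed, only the noise

def rei_fgsm(model, x, y, eps=16 / 255, steps=10):
    """Iterative FGSM where each gradient is taken w.r.t. x + random_erase(delta)."""
    alpha = eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + random_erase(delta)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()  # assumes pixel values in [0, 1]

In the paper, REI-FGSM is further combined with momentum and other input-transformation attacks (e.g., DI-TI-MI-REI-FGSM); that machinery is omitted here for brevity.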


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/2cc687a99086/fnbot-15-784053-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/cd3ec96e2e22/fnbot-15-784053-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/be917a738a0c/fnbot-15-784053-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/98248c26d81d/fnbot-15-784053-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/81903cfd6731/fnbot-15-784053-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7519/8696674/1688cca86499/fnbot-15-784053-g0006.jpg

Similar Articles

1. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
   Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.
2. Gradient Correction for White-Box Adversarial Attacks.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18419-18430. doi: 10.1109/TNNLS.2023.3315414. Epub 2024 Dec 2.
3. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
   Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.
4. Robustifying Deep Networks for Medical Image Segmentation.
   J Digit Imaging. 2021 Oct;34(5):1279-1293. doi: 10.1007/s10278-021-00507-5. Epub 2021 Sep 20.
5. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
   IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.
6. Adversarial Attacks against Deep-Learning-Based Automatic Dependent Surveillance-Broadcast Unsupervised Anomaly Detection Models in the Context of Air Traffic Management.
   Sensors (Basel). 2024 Jun 2;24(11):3584. doi: 10.3390/s24113584.
7. Remix: Towards the transferability of adversarial examples.
   Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
8. Boosting the transferability of adversarial examples via stochastic serial attack.
   Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
9. Image classification adversarial attack with improved resizing transformation and ensemble models.
   PeerJ Comput Sci. 2023 Jul 25;9:e1475. doi: 10.7717/peerj-cs.1475. eCollection 2023.
10. EIFDAA: Evaluation of an IDS with function-discarding adversarial attacks in the IIoT.
    Heliyon. 2023 Feb 9;9(2):e13520. doi: 10.1016/j.heliyon.2023.e13520. eCollection 2023 Feb.