Boosting the transferability of adversarial examples via stochastic serial attack.

Affiliations

College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China.

Publication Information

Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.

DOI: 10.1016/j.neunet.2022.02.025
PMID: 35305532
Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by imposing mild perturbations on clean inputs. An intriguing property of adversarial examples is that they remain effective across different DNNs, so transfer-based attacks against DNNs have become a growing concern. In this scenario, attackers craft adversarial instances on a local model without any feedback from the target model. Unfortunately, most existing transfer-based attack methods employ only a single local model to generate adversarial examples, which yields poor transferability because the examples overfit to that local model. Although several ensemble attacks have been proposed, they improve the transferability of adversarial examples only slightly while incurring high memory cost during the training process. To this end, we propose a novel attack strategy called stochastic serial attack (SSA). It adopts a serial strategy to attack local models, which reduces memory consumption compared to parallel attacks. Moreover, since local models are stochastically selected from a large model set, SSA ensures that the adversarial examples do not overfit specific weaknesses of the local source models. Extensive experiments on the ImageNet dataset and the NeurIPS 2017 adversarial competition dataset show the effectiveness of SSA in improving the transferability of adversarial examples and reducing the memory consumption of the training process.
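
The abstract describes SSA only at a high level. The following is a minimal PyTorch sketch of the serial, stochastically selected attack loop it outlines, not the authors' implementation: the I-FGSM-style update, the one-model-per-step sampling, and the eps/alpha/steps parameters are illustrative assumptions.

```python
import random

import torch
import torch.nn.functional as F


def stochastic_serial_attack(model_pool, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Hypothetical sketch of a stochastic serial attack.

    At each step, one local model is drawn at random from a larger pool and
    attacked with an I-FGSM-style update. Only the sampled model runs per
    step, so peak memory stays close to a single-model attack, unlike
    parallel ensembles that evaluate every model simultaneously.
    """
    x_adv = x.clone().detach()  # x: clean images in [0, 1]; y: true labels
    for _ in range(steps):
        model = random.choice(model_pool)  # stochastic selection from the model set
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # untargeted loss on the sampled model
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # serial gradient step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # L-inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixel range
    return x_adv.detach()
```

A parallel ensemble attack would instead sum or average losses over every model in model_pool at each step, keeping all of them resident in memory at once; the serial draw above needs only one forward-backward pass per step, which is the memory saving the abstract claims.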


Similar Articles

1. Boosting the transferability of adversarial examples via stochastic serial attack. Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
2. Enhancing adversarial attacks with resize-invariant and logical ensemble. Neural Netw. 2024 May;173:106194. doi: 10.1016/j.neunet.2024.106194. Epub 2024 Feb 20.
3. Remix: Towards the transferability of adversarial examples. Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
4. Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective. IEEE Trans Image Process. 2022;31:6487-6501. doi: 10.1109/TIP.2022.3211736. Epub 2022 Oct 21.
5. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories. IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.
6. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors. Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
7. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout. Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.
8. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2188-2197. doi: 10.1109/TPAMI.2020.3033291. Epub 2022 Mar 4.
9. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model. Neural Netw. 2023 Oct;167:730-740. doi: 10.1016/j.neunet.2023.08.048. Epub 2023 Sep 9.
10. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing. Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.