Boosting the transferability of adversarial examples via stochastic serial attack.

Affiliations

College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China.

Publication information

Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by imposing mild perturbations on clean inputs. An intriguing property of adversarial examples is that they often remain effective across different DNNs, so transfer-based attacks against DNNs have become a growing concern. In this scenario, attackers craft adversarial instances on a local model without any feedback from the target one. Unfortunately, most existing transfer-based attack methods employ only a single local model to generate adversarial examples, which results in poor transferability due to overfitting to that model. Although several ensemble attacks have been proposed, they improve the transferability of adversarial examples only slightly, while incurring a high memory cost during training. To this end, we propose a novel attack strategy called stochastic serial attack (SSA). It attacks local models serially, which reduces memory consumption compared to parallel ensemble attacks. Moreover, since the local models are stochastically selected from a large model set, SSA ensures that adversarial examples do not overfit specific weaknesses of the local source models. Extensive experiments on the ImageNet dataset and the NeurIPS 2017 adversarial competition dataset show that SSA is effective in improving the transferability of adversarial examples and in reducing the memory consumption of the training process.
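
The abstract describes the method only at a high level. The snippet below is a minimal PyTorch sketch of the serial idea, assuming an I-FGSM-style inner step; the function name stochastic_serial_attack, the sample size k, and the step sizes eps and alpha are illustrative assumptions, not the paper's published algorithm.

    import random

    import torch
    import torch.nn.functional as F

    def stochastic_serial_attack(x, y, model_pool, eps=16 / 255, alpha=2 / 255,
                                 n_iter=10, k=2):
        # Sketch of an SSA-style transfer attack (assumed I-FGSM inner step,
        # not necessarily the paper's exact update rule).
        # At each outer iteration, k surrogate models are sampled at random
        # from the pool and attacked one after another, so only one model's
        # autograd graph is alive at a time.
        x = x.detach()
        x_adv = x.clone()
        for _ in range(n_iter):
            for model in random.sample(model_pool, k):
                x_adv.requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), y)
                grad = torch.autograd.grad(loss, x_adv)[0]
                # Gradient-sign step, then projection back into the eps-ball
                # around the clean input and the valid pixel range.
                x_adv = x_adv.detach() + alpha * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)
                x_adv = x_adv.clamp(0, 1).detach()
        return x_adv

Under these assumptions, the memory saving comes from the serial structure: each torch.autograd.grad call frees the previous model's graph before the next model is attacked, so peak memory scales with a single model, whereas a parallel ensemble attack that sums the losses of all selected models must hold every model's graph in memory at once.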

