SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.

Authors

Che Zhaohui, Borji Ali, Zhai Guangtao, Ling Suiyi, Li Jing, Min Xiongkuo, Guo Guodong, Le Callet Patrick

Publication

IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.

Abstract

Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of source models transfer to other target models and thus pose a security threat to black-box applications (where attackers have no access to the target models). Current transfer-based ensemble attacks, however, consider only a limited number of source models when crafting an adversarial example and therefore achieve poor transferability. Meanwhile, recent query-based black-box attacks require numerous queries to the target model, which not only arouses the target model's suspicion but also incurs high query costs. In this article, we propose a novel transfer-based black-box attack, dubbed serial-minigroup-ensemble-attack (SMGEA). Concretely, SMGEA first divides a large number of pretrained white-box source models into several "minigroups." For each minigroup, we design three new ensemble strategies to improve the intragroup transferability. Moreover, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous minigroup into the subsequent minigroup. This way, the learned adversarial information is preserved and the intergroup transferability is improved. Experiments indicate that SMGEA not only achieves state-of-the-art black-box attack ability over several data sets but also deceives two online black-box saliency prediction systems in the real world, i.e., DeepGaze-II (https://deepgaze.bethgelab.org/) and SALICON (http://salicon.net/demo/). Finally, we contribute a new code repository to promote research on adversarial attack and defense for ubiquitous pixel-to-pixel computer vision tasks. We share our code together with the pretrained substitute model zoo at https://github.com/CZHQuality/AAA-Pix2pix.
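The abstract outlines the attack's control flow: the source models are split into serial minigroups, each minigroup is attacked as an ensemble, and a long-term gradient memory is carried from one minigroup to the next. The following minimal PyTorch sketch illustrates only that structure; the gradient-averaging ensemble, the momentum-style memory update, and the hyperparameter names (`group_size`, `decay`, etc.) are illustrative assumptions, since the abstract does not specify the three ensemble strategies or the exact accumulation rule. See the authors' repository above for the actual implementation.

```python
import torch

def smgea_sketch(models, x, y, loss_fn, group_size=2,
                 eps=8 / 255, alpha=2 / 255, steps=10, decay=1.0):
    """Illustrative sketch of a serial minigroup ensemble attack.

    Hypothetical helper: the loss-averaging ensemble and the
    momentum-style `memory` update stand in for the paper's three
    intragroup strategies and its long-term gradient-memory rule.
    """
    # Divide the pretrained white-box source models into serial "minigroups".
    minigroups = [models[i:i + group_size]
                  for i in range(0, len(models), group_size)]

    x_adv = x.clone().detach()
    memory = torch.zeros_like(x)  # gradient memory carried across minigroups

    for group in minigroups:
        for _ in range(steps):
            x_adv.requires_grad_(True)
            # Naive intragroup ensemble: average the loss over the group.
            loss = sum(loss_fn(m(x_adv), y) for m in group) / len(group)
            grad = torch.autograd.grad(loss, x_adv)[0]

            # Recursively accumulate gradient memory across minigroups
            # (momentum-style update; an assumption, not the paper's rule).
            memory = decay * memory + grad / (grad.abs().mean() + 1e-12)

            # Untargeted L_inf step, projected back into the eps-ball.
            x_adv = x_adv.detach() + alpha * memory.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()

    return x_adv
```

Because `memory` is never reset between minigroups, adversarial information learned from earlier groups keeps steering the perturbation while later groups are attacked, which is the intuition behind the intergroup transferability claim.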
