Generating adversarial examples without specifying a target model.

Authors

Yang Gaoming, Li Mingwei, Fang Xianjing, Zhang Ji, Liang Xingzhu

Affiliations

School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China.

Department of Mathematics and Computing, University of Southern Queensland, Queensland, Australia.

Publication

PeerJ Comput Sci. 2021 Sep 13;7:e702. doi: 10.7717/peerj-cs.702. eCollection 2021.

Abstract

Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model while they run. In practice, an attacker who issues too many queries is easily detected, a problem that is especially acute in the black-box setting. To address this, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it never needs to query the target. Experimental results show that it achieves a maximum attack success rate of 81.78% on the MNIST dataset and 87.99% on the CIFAR-10 dataset. In addition, it has a low time cost because it is a GAN-based method.
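The abstract describes the core idea of a GAN-based attack: a generator produces adversarial examples directly from noise and a clean input, with no queries to any target classifier at generation time. The sketch below is only an illustration of that general pattern, not the AWTM architecture from the paper; the layer sizes, the perturbation bound `epsilon`, and the class name are assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' AWTM): a generator maps noise to a
# bounded perturbation that is added to a clean image, producing a candidate
# adversarial example without querying any target model.
import torch
import torch.nn as nn


class PerturbationGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100, image_shape=(1, 28, 28), epsilon: float = 0.3):
        super().__init__()
        self.image_shape = image_shape
        self.epsilon = epsilon  # assumed L-infinity bound on the perturbation
        out_dim = image_shape[0] * image_shape[1] * image_shape[2]
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),  # bounded output in [-1, 1]
        )

    def forward(self, z: torch.Tensor, clean_images: torch.Tensor) -> torch.Tensor:
        # Scale the bounded output to [-epsilon, epsilon], add it to the clean
        # image, and clamp back to the valid pixel range.
        delta = self.epsilon * self.net(z).view(-1, *self.image_shape)
        return torch.clamp(clean_images + delta, 0.0, 1.0)


if __name__ == "__main__":
    # Example on MNIST-sized inputs; no target classifier appears anywhere here.
    gen = PerturbationGenerator()
    clean = torch.rand(8, 1, 28, 28)   # stand-in for real MNIST images
    z = torch.randn(8, 100)            # latent noise
    adversarial = gen(z, clean)
    print(adversarial.shape)           # torch.Size([8, 1, 28, 28])
```

In the actual method the generator would be trained adversarially (hence the low per-example time cost once training is done); the details of that training objective are in the paper and are not reproduced here.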

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b280/8459786/52b0f71f72c6/peerj-cs-07-702-g001.jpg
