
Image classification adversarial attack with improved resizing transformation and ensemble models.

Authors

Li Chenwei, Zhang Hengwei, Yang Bo, Wang Jindong

Affiliations

State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, Henan, China.

Henan Key Laboratory of Information Security, Zhengzhou, Henan, China.

Publication

PeerJ Comput Sci. 2023 Jul 25;9:e1475. doi: 10.7717/peerj-cs.1475. eCollection 2023.

DOI: 10.7717/peerj-cs.1475
PMID: 37547405
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10403174/
Abstract

Convolutional neural networks have achieved great success in computer vision, but they output incorrect predictions when intentional perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property makes them useful for evaluating network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in the black-box setting the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used to train generalizable neural networks, and proposes a resizing-invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal models and defense models.
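The abstract's two ingredients, random input-resizing transforms as a form of model augmentation and gradient averaging over a model ensemble, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the two-class linear "models", the nearest-neighbor resize, the random padding, and all parameter values are illustrative assumptions, and the transform gradient is approximated by evaluating the model gradient at the transformed input.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 8  # toy image side length

def nearest_resize(img, size):
    """Nearest-neighbor resize of a square image (the resizing transform)."""
    h, w = img.shape
    ys = (np.arange(size) * h / size).astype(int)
    xs = (np.arange(size) * w / size).astype(int)
    return img[np.ix_(ys, xs)]

def pad_to(img, size):
    """Zero-pad a resized image back to size x size at a random offset."""
    h, w = img.shape
    out = np.zeros((size, size))
    top = rng.integers(0, size - h + 1)
    left = rng.integers(0, size - w + 1)
    out[top:top + h, left:left + w] = img
    return out

# Toy "ensemble": three linear 2-class models, logits = W @ x.flatten()
models = [rng.normal(size=(2, SIZE * SIZE)) for _ in range(3)]

def loss_grad(W, x, label):
    """Gradient of cross-entropy loss w.r.t. the input for one linear model."""
    logits = W @ x.ravel()
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(2)[label]
    return ((p - onehot) @ W).reshape(x.shape)

def rt_ensemble_attack(x, label, eps=0.1, steps=5, n_transforms=4):
    """Iterative FGSM-style attack: average gradients over random resizing
    transforms (model augmentation) and over the model ensemble, then take
    a sign step, keeping the result inside the eps-ball around x."""
    adv = x.copy()
    alpha = eps / steps
    for _ in range(steps):
        g = np.zeros_like(adv)
        for _ in range(n_transforms):
            s = int(rng.integers(SIZE // 2, SIZE + 1))
            xt = pad_to(nearest_resize(adv, s), SIZE)
            for W in models:
                g += loss_grad(W, xt, label)
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)
    return adv

x = rng.random((SIZE, SIZE))
adv = rt_ensemble_attack(x, label=0)
print(np.abs(adv - x).max() <= 0.1 + 1e-9)  # True: perturbation stays in the eps-ball
```

Averaging gradients over random transforms is what makes the perturbation less tied to one model's exact input geometry, and averaging over the ensemble is what the abstract credits for improved black-box transferability; a real attack would use differentiable resizing inside a deep-learning framework.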


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/621f5cae8809/peerj-cs-09-1475-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/f56453ef2b48/peerj-cs-09-1475-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/2bb50bc7689c/peerj-cs-09-1475-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/e3f71cffbc35/peerj-cs-09-1475-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/1857c4e00512/peerj-cs-09-1475-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/7f2985dd83a7/peerj-cs-09-1475-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/0d75c1793355/peerj-cs-09-1475-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/869a/10403174/edaeb3d4f078/peerj-cs-09-1475-g008.jpg

Similar Articles

1. Image classification adversarial attack with improved resizing transformation and ensemble models.
   PeerJ Comput Sci. 2023 Jul 25;9:e1475. doi: 10.7717/peerj-cs.1475. eCollection 2023.
2. Remix: Towards the transferability of adversarial examples.
   Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
3. Adversarial robustness in deep neural networks based on variable attributes of the stochastic ensemble model.
   Front Neurorobot. 2023 Aug 8;17:1205370. doi: 10.3389/fnbot.2023.1205370. eCollection 2023.
4. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
   Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.
5. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
   Neural Netw. 2023 Oct;167:730-740. doi: 10.1016/j.neunet.2023.08.048. Epub 2023 Sep 9.
6. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
   IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.
7. Enhancing adversarial attacks with resize-invariant and logical ensemble.
   Neural Netw. 2024 May;173:106194. doi: 10.1016/j.neunet.2024.106194. Epub 2024 Feb 20.
8. Erosion Attack: Harnessing Corruption To Improve Adversarial Examples.
   IEEE Trans Image Process. 2023;32:4828-4841. doi: 10.1109/TIP.2023.3251719. Epub 2023 Aug 29.
9. Boosting the transferability of adversarial examples via stochastic serial attack.
   Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
10. Towards Transferable Adversarial Attacks on Image and Video Transformers.
    IEEE Trans Image Process. 2023;32:6346-6358. doi: 10.1109/TIP.2023.3331582. Epub 2023 Nov 20.
