


Crafting Adversarial Perturbations via Transformed Image Component Swapping.

Authors

Agarwal Akshay, Ratha Nalini, Vatsa Mayank, Singh Richa

Publication

IEEE Trans Image Process. 2022;31:7338-7349. doi: 10.1109/TIP.2022.3204206. Epub 2022 Nov 30.

DOI: 10.1109/TIP.2022.3204206
PMID: 36094979
Abstract

Adversarial attacks have been demonstrated to fool deep classification networks. These attacks have two key characteristics: first, the perturbations are mostly additive noise carefully crafted from the deep neural network itself; second, the noise is added to the whole image, rather than treating the image as a combination of the components from which it is made. Motivated by these observations, in this research we first study the role of various image components and their impact on image classification. These manipulations require neither knowledge of the network nor external noise to function effectively, and hence have the potential to be one of the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses a projected gradient descent strategy to add the adversarial perturbation to the manipulated component image. Experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100. They show that the proposed attack achieves better transferability, giving an attacker the upper hand. On the ImageNet database, the success rate of the proposed attack is up to 88.5%, while the current state-of-the-art attack success rate on the same database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, adversarial training, to measure its strength. Comparison with several challenging attacks shows that (i) the proposed attack has a higher transferability rate against multiple unseen networks and (ii) its impact is hard to mitigate. We claim that, based on an understanding of image components, this research identifies a new adversarial attack, unseen so far and unsolved by current defense mechanisms.
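The abstract does not specify which image components are swapped; as one purely illustrative example of recombining transformed components, the sketch below swaps Fourier-domain components between two images, keeping the magnitude spectrum of one and the phase spectrum of the other. The function name and the choice of transform are assumptions for illustration, not the authors' method.

```python
import numpy as np

def swap_phase(img_a, img_b):
    """Recombine transformed image components: keep the Fourier
    magnitude of img_a but replace its phase with that of img_b.

    Both inputs are 2-D float arrays with pixel values in [0, 1].
    """
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Magnitude from image A, phase from image B.
    swapped = np.abs(fa) * np.exp(1j * np.angle(fb))
    out = np.real(np.fft.ifft2(swapped))
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
a = rng.random((8, 8))
b = rng.random((8, 8))
hybrid = swap_phase(a, b)  # same shape as the inputs, valid pixel range
```

Such a manipulation changes the image without querying any network, which matches the abstract's claim that the component manipulations need no knowledge of the network to function.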

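The projected-gradient-descent step the abstract refers to can be sketched generically as follows. This is a minimal L-infinity PGD loop, assuming the caller supplies a loss-gradient function; `grad_fn`, `epsilon`, and `alpha` are illustrative names, and the toy demo at the bottom uses a constant gradient rather than a real classifier.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon=0.03, alpha=0.01, steps=10):
    """Projected gradient descent under an L-infinity budget.

    grad_fn(x_adv) must return the gradient of the classification
    loss with respect to the input. Each step ascends the loss,
    projects back onto the epsilon-ball around x, and clips to the
    valid pixel range [0, 1].
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)                 # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project onto the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                   # stay a valid image
    return x_adv

# Toy demo: a stand-in "gradient" that pushes every pixel upward.
x = np.full((4, 4), 0.5)
x_adv = pgd_attack(lambda z: np.ones_like(z), x)
```

In the attack the abstract describes, `x` would be the manipulated component image rather than the clean input, and `grad_fn` would come from a surrogate network, which is what makes the resulting perturbation transferable to unseen models.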

Similar Articles

1
Crafting Adversarial Perturbations via Transformed Image Component Swapping.
IEEE Trans Image Process. 2022;31:7338-7349. doi: 10.1109/TIP.2022.3204206. Epub 2022 Nov 30.
2
Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
Neural Netw. 2024 Mar;171:127-143. doi: 10.1016/j.neunet.2023.11.056. Epub 2023 Nov 25.
3
Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
Neural Netw. 2020 Aug;128:61-72. doi: 10.1016/j.neunet.2020.04.030. Epub 2020 Apr 30.
4
Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
Neural Netw. 2023 Oct;167:730-740. doi: 10.1016/j.neunet.2023.08.048. Epub 2023 Sep 9.
5
DEFEAT: Decoupled feature attack across deep neural networks.
Neural Netw. 2022 Dec;156:13-28. doi: 10.1016/j.neunet.2022.09.009. Epub 2022 Sep 20.
6
Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
7
Boosting the transferability of adversarial examples via stochastic serial attack.
Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
8
ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
Entropy (Basel). 2022 Mar 15;24(3):412. doi: 10.3390/e24030412.
9
DAMAD: Database, Attack, and Model Agnostic Adversarial Perturbation Detector.
IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3277-3289. doi: 10.1109/TNNLS.2021.3051529. Epub 2022 Aug 3.
10
Image Super-Resolution as a Defense Against Adversarial Attacks.
IEEE Trans Image Process. 2019 Sep 19. doi: 10.1109/TIP.2019.2940533.