

Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.

Affiliations

School of Electronics and Information Engineering, Soochow University, Suzhou 215006, PR China.

Publication Information

Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.

DOI: 10.1016/j.neunet.2023.06.031
PMID: 37441909
Abstract

Deep neural networks are sensitive to adversarial examples and can produce wrong results with high confidence. However, most existing attack methods exhibit weak transferability, especially against adversarially trained models and defense models. In this paper, two methods are proposed to generate highly transferable adversarial examples, namely the Adaptive Inertia Iterative Fast Gradient Sign Method (AdaI-FGSM) and the Amplitude Spectrum Dropout Method (ASDM). Specifically, AdaI-FGSM integrates adaptive inertia into the gradient-based attack and leverages the looking-ahead property to search for a flatter maximum, which is essential to strengthen the transferability of adversarial examples. By introducing a loss-preserving transformation in the frequency domain, the proposed ASDM, with its dropout-invariance property, can craft copies of input images to overcome poor generalization on the surrogate models. Furthermore, AdaI-FGSM and ASDM can be naturally integrated into an efficient gradient-based attack method that yields more transferable adversarial examples. Extensive experimental results on the ImageNet-compatible dataset demonstrate that our method achieves higher transferability than several advanced gradient-based attacks.
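The abstract only sketches the two components. As an illustration of the general recipe, the toy NumPy sketch below combines a momentum-style inertia term in an iterative sign-gradient attack with dropout applied to the FFT amplitude spectrum of the input. This is a hedged approximation, not the paper's exact algorithm: the function names, the fixed inertia coefficient `beta` (the paper's inertia is adaptive), the MI-FGSM-style gradient normalization, and the dropout mask scheme are all illustrative assumptions.

```python
import numpy as np

def amplitude_spectrum_dropout(x, drop_prob=0.1, rng=None):
    """Frequency-domain augmentation in the spirit of ASDM (illustrative):
    randomly zero entries of the amplitude spectrum while keeping the phase."""
    rng = np.random.default_rng(rng)
    spec = np.fft.fft2(x)
    amp, phase = np.abs(spec), np.angle(spec)
    mask = rng.random(amp.shape) >= drop_prob  # keep ~(1 - drop_prob) of amplitudes
    return np.real(np.fft.ifft2(amp * mask * np.exp(1j * phase)))

def adai_fgsm(x, grad_fn, eps=0.1, steps=10, n_copies=4, beta=0.9, seed=0):
    """Iterative sign-gradient ascent with an accumulated inertia term,
    averaging gradients over spectrum-dropped copies of the current image.
    grad_fn(z) should return the loss gradient w.r.t. the input z."""
    alpha = eps / steps                          # per-step budget
    x_adv = x.copy()
    inertia = np.zeros_like(x)
    for t in range(steps):
        # average gradients over several amplitude-spectrum-dropped copies
        g = np.mean(
            [grad_fn(amplitude_spectrum_dropout(x_adv, rng=seed + t * n_copies + i))
             for i in range(n_copies)],
            axis=0,
        )
        g = g / (np.mean(np.abs(g)) + 1e-12)     # L1-style normalization (as in MI-FGSM)
        inertia = beta * inertia + g             # accumulate "inertia" (fixed beta here)
        x_adv = x_adv + alpha * np.sign(inertia) # ascend the surrogate loss
        x_adv = np.clip(x_adv, x - eps, x + eps) # project back into the eps-ball
    return x_adv
```

With a toy constant gradient (e.g. `grad_fn = lambda z: np.ones_like(z)`), every step moves each pixel by `+alpha`, so after `steps` iterations the perturbation saturates the `eps` budget — a quick sanity check that the update and projection behave as intended.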


Similar Articles

1. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout. Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.
2. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing. Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.
3. Boosting the transferability of adversarial examples via stochastic serial attack. Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
4. Enhancing adversarial attacks with resize-invariant and logical ensemble. Neural Netw. 2024 May;173:106194. doi: 10.1016/j.neunet.2024.106194. Epub 2024 Feb 20.
5. Gradient Correction for White-Box Adversarial Attacks. IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18419-18430. doi: 10.1109/TNNLS.2023.3315414. Epub 2024 Dec 2.
6. Robustifying Deep Networks for Medical Image Segmentation. J Digit Imaging. 2021 Oct;34(5):1279-1293. doi: 10.1007/s10278-021-00507-5. Epub 2021 Sep 20.
7. Adversarial Attacks against Deep-Learning-Based Automatic Dependent Surveillance-Broadcast Unsupervised Anomaly Detection Models in the Context of Air Traffic Management. Sensors (Basel). 2024 Jun 2;24(11):3584. doi: 10.3390/s24113584.
8. Remix: Towards the transferability of adversarial examples. Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
9. Diffusion Models for Imperceptible and Transferable Adversarial Attack. IEEE Trans Pattern Anal Mach Intell. 2025 Feb;47(2):961-977. doi: 10.1109/TPAMI.2024.3480519. Epub 2025 Jan 9.
10. DEFEAT: Decoupled feature attack across deep neural networks. Neural Netw. 2022 Dec;156:13-28. doi: 10.1016/j.neunet.2022.09.009. Epub 2022 Sep 20.