
Enhancing adversarial attacks with resize-invariant and logical ensemble.

Affiliations

School of Computer and Software, Nanyang Institute of Technology, Nanyang, 473000, China.

School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, 450002, China.

Publication info

Neural Netw. 2024 May;173:106194. doi: 10.1016/j.neunet.2024.106194. Epub 2024 Feb 20.

DOI: 10.1016/j.neunet.2024.106194
PMID: 38402809
Abstract

In black-box scenarios, most transfer-based attacks usually improve the transferability of adversarial examples by optimizing the gradient calculation of the input image. Unfortunately, since the gradient information is only calculated and optimized for each pixel point in the image individually, the generated adversarial examples tend to overfit the local model and have poor transferability to the target model. To tackle the issue, we propose a resize-invariant method (RIM) and a logical ensemble transformation method (LETM) to enhance the transferability of adversarial examples. Specifically, RIM is inspired by the resize-invariant property of Deep Neural Networks (DNNs). The range of resizable pixel is first divided into multiple intervals, and then the input image is randomly resized and padded within each interval. Finally, LETM performs logical ensemble of multiple images after RIM transformation to calculate the final gradient update direction. The proposed method adequately considers the information of each pixel in the image and the surrounding pixels. The probability of duplication of image transformations is minimized and the overfitting effect of adversarial examples is effectively mitigated. Numerous experiments on the ImageNet dataset show that our approach outperforms other advanced methods and is capable of generating more transferable adversarial examples.
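The resize-invariant step described above (split the resizable pixel range into intervals, then randomly resize and pad within a randomly chosen interval) can be sketched as below. This is a minimal illustration based only on the abstract, not the authors' implementation: the interval count, maximum canvas size, nearest-neighbour resizing, and zero-padding are all assumptions.

```python
import numpy as np

def rim_transform(img, num_intervals=5, max_size=120, rng=None):
    """Sketch of the Resize-Invariant Method (RIM) from the abstract.

    The range of resizable sizes [H, max_size] is divided into
    num_intervals intervals; a target size is drawn from a randomly
    chosen interval, the image is resized to it, and the result is
    padded back to max_size at a random offset.
    """
    rng = rng or np.random.default_rng()
    h = img.shape[0]  # assume a square H x H x C image
    # Interval edges over the resizable range [H, max_size].
    edges = np.linspace(h, max_size, num_intervals + 1).astype(int)
    i = int(rng.integers(num_intervals))
    size = int(rng.integers(edges[i], edges[i + 1] + 1))
    # Nearest-neighbour resize (stand-in for real interpolation).
    idx = (np.arange(size) * h // size).clip(0, h - 1)
    resized = img[idx][:, idx]
    # Zero-pad the resized image onto a max_size canvas at a random offset.
    canvas = np.zeros((max_size, max_size, img.shape[2]), dtype=img.dtype)
    top = int(rng.integers(0, max_size - size + 1))
    left = int(rng.integers(0, max_size - size + 1))
    canvas[top:top + size, left:left + size] = resized
    return canvas
```

The logical ensemble step (LETM) would then aggregate gradients computed on several such transformed copies of the input to obtain the final update direction; how the "logical" combination differs from plain gradient averaging is detailed in the paper itself.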


Similar articles

1
Enhancing adversarial attacks with resize-invariant and logical ensemble.
Neural Netw. 2024 May;173:106194. doi: 10.1016/j.neunet.2024.106194. Epub 2024 Feb 20.
2
Boosting the transferability of adversarial examples via stochastic serial attack.
Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
3
Remix: Towards the transferability of adversarial examples.
Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
4
Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.
5
SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.
6
Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective.
IEEE Trans Image Process. 2022;31:6487-6501. doi: 10.1109/TIP.2022.3211736. Epub 2022 Oct 21.
7
Adaptive Cross-Modal Transferable Adversarial Attacks From Images to Videos.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):3772-3783. doi: 10.1109/TPAMI.2023.3347835. Epub 2024 Apr 3.
8
Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
Front Neurorobot. 2021 Dec 9;15:784053. doi: 10.3389/fnbot.2021.784053. eCollection 2021.
9
Image classification adversarial attack with improved resizing transformation and ensemble models.
PeerJ Comput Sci. 2023 Jul 25;9:e1475. doi: 10.7717/peerj-cs.1475. eCollection 2023.
10
Towards Transferable Adversarial Attacks on Image and Video Transformers.
IEEE Trans Image Process. 2023;32:6346-6358. doi: 10.1109/TIP.2023.3331582. Epub 2023 Nov 20.