

Attention distraction with gradient sharpening for multi-task adversarial attack.

Authors

Liu Bingyu, Hu Jiani, Deng Weihong

Affiliation

Pattern Recognition and Intelligent System Laboratory, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China.

Publication

Math Biosci Eng. 2023 Jun 14;20(8):13562-13580. doi: 10.3934/mbe.2023605.

Abstract

The advancement of deep learning has resulted in significant improvements on various visual tasks. However, deep neural networks (DNNs) have been found to be vulnerable to well-designed adversarial examples, which can easily deceive DNNs by adding visually imperceptible perturbations to original clean data. Prior research on adversarial attack methods mainly focused on single-task settings, i.e., generating adversarial examples to fool a network trained for a specific task. However, real-world artificial intelligence systems often need to solve multiple tasks simultaneously, and in such multi-task settings single-task adversarial attacks perform poorly on the unrelated tasks. To address this issue, the generation of multi-task adversarial examples should leverage the generalization knowledge shared among the tasks and reduce the influence of task-specific information during the generation process. In this study, we propose a multi-task adversarial attack method that generates adversarial examples from a multi-task learning network by applying attention distraction with gradient sharpening. Specifically, we first attack the attention heat maps, which contain more generalizable information than feature representations, by distracting the attention from the attacked regions. Additionally, we use gradient-based adversarial example generation schemes and propose to sharpen the gradients so that gradients carrying multi-task information, rather than only task-specific information, have a greater impact. Experimental results on the NYUD-V2 and PASCAL datasets demonstrate that the proposed method improves the generalization ability of adversarial examples across multiple tasks and achieves better attack performance.
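The abstract does not specify the exact sharpening operator, so the sketch below is only an illustration of the general idea: in a gradient-based (FGSM-style) attack on a multi-task network, per-task gradients are rescaled so that large components (more likely shared across tasks) dominate small task-specific ones before the sign step. The `sharpen` function, its `temperature` parameter, and the NumPy setting are all hypothetical choices, not the paper's method.

```python
import numpy as np

def sharpen(grad, temperature=0.5):
    """Hypothetical gradient sharpening: raise normalized gradient magnitudes
    to the power 1/temperature (temperature < 1) so that large components are
    emphasized relative to small, task-specific ones."""
    mag = np.abs(grad)
    peak = mag.max()
    if peak == 0.0:  # all-zero gradient: nothing to sharpen
        return grad
    sharpened_mag = (mag / peak) ** (1.0 / temperature) * peak
    return np.sign(grad) * sharpened_mag

def fgsm_step(x, per_task_grads, epsilon=0.03, temperature=0.5):
    """One FGSM-style step on the sum of sharpened per-task gradients,
    keeping the adversarial image in the valid [0, 1] range."""
    total = sum(sharpen(g, temperature) for g in per_task_grads)
    return np.clip(x + epsilon * np.sign(total), 0.0, 1.0)

# Toy usage: one task gradient with a dominant component (2.0) and a tiny
# task-specific one (-0.01); sharpening widens the gap before the sign step.
x = np.full((2, 2), 0.5)
grad = np.array([[1.0, -0.01],
                 [0.0,  2.0]])
x_adv = fgsm_step(x, [grad])
```

With `temperature=0.5` the magnitude ratio between the largest and smallest nonzero components grows quadratically, which is the intended "sharpening" effect: components supported by multiple tasks steer the perturbation more strongly than isolated task-specific ones.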

