

Defending Against Multiple and Unforeseen Adversarial Videos.

Authors

Lo Shao-Yuan, Patel Vishal M

Publication Information

IEEE Trans Image Process. 2022;31:962-973. doi: 10.1109/TIP.2021.3137648. Epub 2022 Jan 6.

DOI: 10.1109/TIP.2021.3137648
PMID: 34965207
Abstract

Adversarial robustness of deep neural networks has been actively investigated. However, most existing defense approaches are limited to a specific type of adversarial perturbation. Specifically, they often fail to offer resistance to multiple attack types simultaneously, i.e., they lack multi-perturbation robustness. Furthermore, compared to image recognition problems, the adversarial robustness of video recognition models is relatively unexplored. While several studies have proposed methods for generating adversarial videos, only a handful of defense strategies have been published in the literature. In this paper, we propose one of the first defense strategies against multiple types of adversarial videos for video recognition. The proposed method, referred to as MultiBN, performs adversarial training on multiple adversarial video types using multiple independent batch normalization (BN) layers with a learning-based BN selection module. With a multiple-BN structure, each BN branch is responsible for learning the distribution of a single perturbation type and thus provides more precise distribution estimates. This mechanism is beneficial for dealing with multiple perturbation types. The BN selection module detects the attack type of an input video and sends it to the corresponding BN branch, making MultiBN fully automatic and allowing end-to-end training. Compared to existing adversarial training approaches, the proposed MultiBN exhibits stronger multi-perturbation robustness against different and even unforeseen adversarial video types, ranging from Lp-bounded attacks to physically realizable attacks. This holds true on different datasets and target models. Moreover, we conduct an extensive analysis to study the properties of the multiple-BN structure.
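The multiple-BN mechanism described in the abstract can be illustrated with a minimal sketch: several independent sets of normalization statistics, plus a selector that routes each input to one branch. This is a hypothetical NumPy simplification, not the authors' implementation; in the paper the selection module is a learned classifier trained end-to-end, which is replaced here by a nearest-statistics heuristic, and running-statistics updates are omitted.

```python
import numpy as np

class MultiBN:
    """Sketch of a multi-branch batch-normalization layer: one BN branch
    per perturbation type, plus a selector that routes inputs to a branch."""

    def __init__(self, num_features, num_branches, eps=1e-5):
        # Independent running statistics per branch, so each branch can
        # model the feature distribution of a single perturbation type.
        self.means = np.zeros((num_branches, num_features))
        self.vars = np.ones((num_branches, num_features))
        self.gamma = np.ones(num_features)   # shared affine scale
        self.beta = np.zeros(num_features)   # shared affine shift
        self.eps = eps

    def select_branch(self, x):
        # Stand-in for the learned BN selection module: pick the branch
        # whose running mean is closest to the batch mean.
        batch_mean = x.mean(axis=0)
        dists = np.linalg.norm(self.means - batch_mean, axis=1)
        return int(np.argmin(dists))

    def forward(self, x):
        # Route the input to one branch and normalize with that branch's
        # statistics, then apply the shared affine transform.
        b = self.select_branch(x)
        x_hat = (x - self.means[b]) / np.sqrt(self.vars[b] + self.eps)
        return self.gamma * x_hat + self.beta

# Example: inputs matching branch 1's statistics are routed to branch 1.
mbn = MultiBN(num_features=4, num_branches=3)
mbn.means[1] = 5.0
x = np.full((8, 4), 5.0)
branch = mbn.select_branch(x)
out = mbn.forward(x)
```

Because each branch keeps separate statistics, normalizing a clean batch with an adversarial branch's statistics (or vice versa) is avoided, which is the intuition behind the more precise per-perturbation distribution estimates claimed in the paper.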


Similar Articles

1
Defending Against Multiple and Unforeseen Adversarial Videos.
IEEE Trans Image Process. 2022;31:962-973. doi: 10.1109/TIP.2021.3137648. Epub 2022 Jan 6.
2
Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
Neural Netw. 2024 Mar;171:127-143. doi: 10.1016/j.neunet.2023.11.056. Epub 2023 Nov 25.
3
Temporal shuffling for defending deep action recognition models against adversarial attacks.
Neural Netw. 2024 Jan;169:388-397. doi: 10.1016/j.neunet.2023.10.033. Epub 2023 Oct 27.
4
Uni-image: Universal image construction for robust neural model.
Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.
5
Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
6
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.
7
Jointly Defending DeepFake Manipulation and Adversarial Attack Using Decoy Mechanism.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9922-9931. doi: 10.1109/TPAMI.2023.3253390. Epub 2023 Jun 30.
8
Adversarial Attack and Defense in Deep Ranking.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5306-5324. doi: 10.1109/TPAMI.2024.3365699. Epub 2024 Jul 2.
9
Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.
10
Adversarial example defense based on image reconstruction.
PeerJ Comput Sci. 2021 Dec 24;7:e811. doi: 10.7717/peerj-cs.811. eCollection 2021.