
T-BFA: Targeted Bit-Flip Adversarial Weight Attack.

Author information

Rakin Adnan Siraj, He Zhezhi, Li Jingtao, Yao Fan, Chakrabarti Chaitali, Fan Deliang

Publication information

IEEE Trans Pattern Anal Mach Intell. 2021 Sep 16;PP. doi: 10.1109/TPAMI.2021.3112932.

DOI: 10.1109/TPAMI.2021.3112932
PMID: 34529561
Abstract

Traditional Deep Neural Network (DNN) security is mostly related to the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely, attack on DNN weight parameters, has been shown to be very powerful. As a representative one, the Bit-Flip based adversarial weight Attack (BFA) injects an extremely small amount of faults into weight parameters to hijack the executing DNN function. Prior works of BFA focus on un-targeted attacks that can hack all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first work of targeted BFA based (T-BFA) adversarial weight attack on DNNs, which can intentionally mislead selected inputs to a target output class. The objective is achieved by identifying the weight bits that are highly associated with classification of a targeted output through a class-dependent weight bit searching algorithm. Our proposed T-BFA performance is successfully demonstrated on multiple DNN architectures for image classification tasks. For example, by merely flipping 27 out of 88 million weight bits of ResNet-18, our T-BFA can misclassify all the images from Hen class into Goose class (i.e., 100% attack success rate) in ImageNet dataset, while maintaining 59.35% validation accuracy.
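The abstract compresses two mechanics worth seeing concretely: a single bit flip in an int8-quantized weight can move its value by up to 128 quantization steps (flipping the sign bit), and a targeted loss over the selected inputs supplies the gradient signal used to search for vulnerable bits. The sketch below illustrates both under stated assumptions (PyTorch, two's-complement int8 weight storage); `flip_bit`, `rank_params_targeted`, and the toy model are hypothetical illustrations, and the gradient ranking is only a rough stand-in for the paper's class-dependent intra-layer and inter-layer bit search, which additionally checks each bit's flip direction and actual effect on the targeted loss.

```python
# Minimal sketch (not the authors' implementation) of bit-flip weight
# attack mechanics: (1) one bit flip on an int8 weight, (2) gradient-based
# ranking of parameters under a *targeted* misclassification loss.
import torch
import torch.nn.functional as F

def flip_bit(stored_byte: int, bit: int) -> int:
    """Flip one bit of an 8-bit two's-complement weight; return the
    resulting signed value."""
    flipped = (stored_byte ^ (1 << bit)) & 0xFF  # XOR toggles the chosen bit
    return flipped - 256 if flipped >= 128 else flipped

# Flipping the sign bit (bit 7) of +100 yields -28: a single-bit memory
# fault moves the weight by 128 quantization steps.
assert flip_bit(100, 7) == -28

def rank_params_targeted(model, x_selected, target_class: int):
    """Crude proxy for the class-dependent bit search: score each parameter
    tensor by the largest gradient magnitude of the loss that pushes the
    selected inputs toward `target_class`; larger scores mark more
    promising locations for a bit flip."""
    model.zero_grad()
    logits = model(x_selected)
    targets = torch.full((x_selected.size(0),), target_class, dtype=torch.long)
    F.cross_entropy(logits, targets).backward()
    scores = {name: p.grad.abs().max().item()
              for name, p in model.named_parameters() if p.grad is not None}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage: a linear classifier and a batch of "Hen" inputs to be pushed
# toward a hypothetical "Goose" class index.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x_hen = torch.randn(4, 1, 28, 28)
print(rank_params_targeted(model, x_hen, target_class=3)[:2])
```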


Similar articles

1. T-BFA: Targeted Bit-Flip Adversarial Weight Attack. IEEE Trans Pattern Anal Mach Intell. 2021 Sep 16;PP. doi: 10.1109/TPAMI.2021.3112932.
2. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3632-3647. doi: 10.1109/TPAMI.2022.3181972.
3. Universal adversarial attacks on deep neural networks for medical image classification. BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
4. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time. Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.
5. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks. PLoS One. 2020 Dec 17;15(12):e0243963. doi: 10.1371/journal.pone.0243963. eCollection 2020.
6. Versatile Weight Attack via Flipping Limited Bits. IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13653-13665. doi: 10.1109/TPAMI.2023.3296408. Epub 2023 Oct 3.
7. Perturbing BEAMs: EEG adversarial attack to deep learning models for epilepsy diagnosing. BMC Med Inform Decis Mak. 2023 Jul 6;23(1):115. doi: 10.1186/s12911-023-02212-5.
8. Boosting the transferability of adversarial examples via stochastic serial attack. Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
9. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2188-2197. doi: 10.1109/TPAMI.2020.3033291. Epub 2022 Mar 4.
10. Compression Helps Deep Learning in Image Classification. Entropy (Basel). 2021 Jul 10;23(7):881. doi: 10.3390/e23070881.