Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.

Author Information

Stutz David, Chandramoorthy Nandhini, Hein Matthias, Schiele Bernt

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3632-3647. doi: 10.1109/TPAMI.2022.3181972.

DOI: 10.1109/TPAMI.2022.3181972
PMID: 37815955
Abstract

Deep neural network (DNN) accelerators received considerable attention in recent years due to the potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows to further reduce energy consumption, however, causes bit-level failures in the memory storing the quantized weights. Furthermore, DNN accelerators are vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, as well as random bit error training (RandBET) or adversarial bit error training (AdvBET) improves robustness against random or adversarial bit errors in quantized DNN weights significantly. This leads not only to high energy savings for low-voltage operation as well as low-precision quantization, but also improves security of DNN accelerators. In contrast to related work, our approach generalizes across operating voltages and accelerators and does not require hardware changes. Moreover, we present a novel adversarial bit error attack and are able to obtain robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8%/2% in test accuracy, we can reduce energy consumption on CIFAR10 by 20%/30% for 8/4-bit quantization. Allowing up to 320 adversarial bit errors, we reduce test error from above 90% (chance level) to 26.22%.
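To make the mechanism the abstract describes concrete, below is a minimal Python/NumPy sketch, not the authors' implementation: weights are clipped and quantized to m-bit fixed point, then random bit errors are injected at rate p, mimicking the bit-level memory failures that low-voltage operation causes. The function names, the clipping bound w_max, and the default error rate are illustrative assumptions.

import numpy as np

def quantize(w, m=8, w_max=0.5):
    # Weight clipping followed by symmetric m-bit fixed-point quantization.
    w = np.clip(w, -w_max, w_max)
    scale = (2 ** (m - 1) - 1) / w_max
    return np.round(w * scale).astype(np.int32), scale

def inject_bit_errors(q, m=8, p=0.01, rng=None):
    # Flip each of the m stored bits independently with probability p,
    # mimicking bit-level failures in low-voltage accelerator memory.
    rng = np.random.default_rng() if rng is None else rng
    u = (q.reshape(-1) & ((1 << m) - 1)).astype(np.int64)  # unsigned two's-complement view
    flips = rng.random((u.size, m)) < p                    # Bernoulli(p) per bit
    masks = (flips * (1 << np.arange(m))).sum(axis=1)
    u ^= masks
    u = np.where(u >= 1 << (m - 1), u - (1 << m), u)       # back to signed integers
    return u.reshape(q.shape).astype(np.int32)

# RandBET-style training would run the forward pass on the perturbed weights:
w = np.random.randn(4, 4) * 0.1
q, scale = quantize(w)
w_noisy = inject_bit_errors(q, p=0.01) / scale             # de-quantized noisy weights

The adversarial counterpart (AdvBET) would replace the random flips with a search for the most damaging bit positions and train against those worst-case errors.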

Similar Articles

1. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
   IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3632-3647. doi: 10.1109/TPAMI.2022.3181972.
2. Training high-performance and large-scale deep neural networks with full 8-bit integers.
   Neural Netw. 2020 May;125:70-82. doi: 10.1016/j.neunet.2019.12.027. Epub 2020 Jan 15.
3. T-BFA: Targeted Bit-Flip Adversarial Weight Attack.
   IEEE Trans Pattern Anal Mach Intell. 2021 Sep 16;PP. doi: 10.1109/TPAMI.2021.3112932.
4. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
   IEEE Trans Neural Netw Learn Syst. 2021 Jul;32(7):2925-2938. doi: 10.1109/TNNLS.2020.3008996. Epub 2021 Jul 6.
5. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5745-5759. doi: 10.1109/TNNLS.2021.3130991. Epub 2023 Sep 1.
6. Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
   Sensors (Basel). 2024 Mar 27;24(7):2145. doi: 10.3390/s24072145.
7. IVS-Caffe-Hardware-Oriented Neural Network Model Development.
   IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5978-5992. doi: 10.1109/TNNLS.2021.3072145. Epub 2022 Oct 5.
8. Designing Efficient Bit-Level Sparsity-Tolerant Memristive Networks.
   IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11979-11988. doi: 10.1109/TNNLS.2023.3250437. Epub 2024 Sep 3.
9. Design of Hardware Accelerators for Optimized and Quantized Neural Networks to Detect Atrial Fibrillation in Patch ECG Device with RISC-V.
   Sensors (Basel). 2023 Mar 1;23(5):2703. doi: 10.3390/s23052703.
10. Unsupervised Network Quantization via Fixed-Point Factorization.
   IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2706-2720. doi: 10.1109/TNNLS.2020.3007749. Epub 2021 Jun 2.

Cited By

1. The inherent adversarial robustness of analog in-memory computing.
   Nat Commun. 2025 Feb 19;16(1):1756. doi: 10.1038/s41467-025-56595-2.
2. Adversarial attacks on spiking convolutional neural networks for event-based vision.
   Front Neurosci. 2022 Dec 22;16:1068193. doi: 10.3389/fnins.2022.1068193. eCollection 2022.