Stutz David, Chandramoorthy Nandhini, Hein Matthias, Schiele Bernt
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3632-3647. doi: 10.1109/TPAMI.2022.3181972.
Deep neural network (DNN) accelerators have received considerable attention in recent years due to their potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but causes bit-level failures in the memory storing the quantized weights. Furthermore, DNN accelerators are vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) or adversarial bit error training (AdvBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights. This not only yields high energy savings from low-voltage operation and low-precision quantization, but also improves the security of DNN accelerators. In contrast to related work, our approach generalizes across operating voltages and accelerators and does not require hardware changes. Moreover, we present a novel adversarial bit error attack and obtain robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8%/2% in test accuracy, we can reduce energy consumption on CIFAR10 by 20%/30% for 8/4-bit quantization. Allowing up to 320 adversarial bit errors, we reduce the test error from above 90% (chance level) to 26.22%.
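The bit error model underlying RandBET can be illustrated with a short sketch: weights are clipped and quantized to fixed-point integers, and each stored bit is then flipped independently with some probability, mimicking low-voltage memory failures. This is only a hedged illustration of the error model under assumed conventions (symmetric 8-bit quantization, two's-complement storage, i.i.d. bit flips with rate `p`), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

BITS = 8  # assumed 8-bit fixed-point quantization

def quantize(w, w_max=1.0):
    """Symmetric fixed-point quantization. Clipping weights to [-w_max, w_max]
    keeps the quantization range small, so each flipped bit perturbs the
    dequantized weight less (the motivation for weight clipping in the paper)."""
    scale = (2 ** (BITS - 1) - 1) / w_max
    q = np.round(np.clip(w, -w_max, w_max) * scale).astype(np.int8)
    return q, scale

def inject_random_bit_errors(q, p, rng):
    """Flip each stored bit independently with probability p
    (a simple model of low-voltage memory faults)."""
    bits = np.unpackbits(q.view(np.uint8))      # two's-complement bit view
    flips = (rng.random(bits.shape) < p).astype(np.uint8)
    return np.packbits(bits ^ flips).view(np.int8).reshape(q.shape)

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=1000).astype(np.float32)
q, scale = quantize(w)
q_err = inject_random_bit_errors(q, p=0.01, rng=rng)
w_err = q_err.astype(np.float32) / scale  # dequantized, corrupted weights
# With p=0.01, roughly 1 - 0.99**8 ≈ 7.7% of 8-bit weights change.
```

During RandBET-style training one would inject such errors into the quantized weights of each forward pass, so the network learns to tolerate them.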