Corey Lammie, Julian Büchel, Athanasios Vasilopoulos, Manuel Le Gallo, Abu Sebastian
IBM Research Europe, Rüschlikon, Switzerland.
Nat Commun. 2025 Feb 19;16(1):1756. doi: 10.1038/s41467-025-56595-2.
A key challenge for deep neural network algorithms is their vulnerability to adversarial attacks. Inherently non-deterministic compute substrates, such as those based on analog in-memory computing, have been speculated to provide significant adversarial robustness when performing deep neural network inference. In this paper, we experimentally validate this conjecture for the first time on an analog in-memory computing chip based on phase change memory devices. We demonstrate higher adversarial robustness against different types of adversarial attacks when implementing an image classification network. Additional robustness is also observed when performing hardware-in-the-loop attacks, for which the attacker is assumed to have full access to the hardware. A careful study of the various noise sources indicates that a combination of stochastic noise sources (both recurrent and non-recurrent) is responsible for the adversarial robustness, and that their type and magnitude disproportionately affect this property. Finally, it is demonstrated, via simulations, that when a much larger transformer network is used to implement a natural language processing task, additional robustness is still observed.
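The two ingredients the abstract combines, gradient-based adversarial perturbation and stochastic noise in the compute substrate, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's method: it uses a single linear classifier in place of a deep network, additive Gaussian weight noise as a stand-in for the chip's phase change memory noise, and the Fast Gradient Sign Method (FGSM), one common attack of the kind the paper evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier y = sign(w . x); the weights stand in for a
# trained network (an assumption for illustration only).
w = rng.normal(size=8)

def predict(x, weight_noise=0.0):
    # Additive Gaussian weight noise loosely models a non-deterministic
    # analog substrate; this is a simplification, not the PCM noise model.
    w_eff = w + rng.normal(scale=weight_noise, size=w.shape)
    return np.sign(w_eff @ x)

def fgsm(x, y, eps):
    # For this linear model the gradient of the margin y * (w . x)
    # with respect to x is y * w; FGSM steps against its sign.
    grad = y * w
    return x - eps * np.sign(grad)

x = rng.normal(size=8)
y = predict(x)                  # treat the clean prediction as the label
x_adv = fgsm(x, y, eps=0.3)     # every coordinate shifts by exactly eps
```

Because the attacker's gradient is computed against the deterministic weights, each noisy forward pass (`weight_noise > 0`) effectively evaluates a slightly different model, which is the intuition behind the robustness the paper measures on hardware.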