Nanjing University of Science and Technology, Nanjing, 210094, China.
Chinese Academy of Sciences, China.
Neural Netw. 2023 Aug;165:164-174. doi: 10.1016/j.neunet.2023.05.038. Epub 2023 May 24.
Spiking Neural Networks (SNNs) have been recognized as the third generation of neural networks. Conventionally, an SNN can be converted from a pre-trained Artificial Neural Network (ANN) at a lower computation and memory cost than training from scratch. However, these converted SNNs are vulnerable to adversarial attacks. Numerical experiments demonstrate that SNNs trained directly by optimizing the loss function are more robust to adversarial attacks, but a theoretical analysis of the mechanism behind this robustness has been lacking. In this paper, we provide a theoretical explanation by analyzing the expected risk function. Starting from a model of the stochastic process introduced by the Poisson encoder, we prove that a positive semidefinite regularizer arises. Perhaps surprisingly, this regularizer drives the gradients of the output with respect to the input closer to zero, resulting in inherent robustness against adversarial attacks. Extensive experiments on the CIFAR10 and CIFAR100 datasets support this view. For example, we find that the sum of squared gradients of converted SNNs is 13∼160 times that of trained SNNs, and the smaller this sum, the smaller the accuracy degradation under adversarial attack.
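Two objects named in the abstract can be made concrete in a few lines: the Poisson encoder, whose stochastic spike trains introduce the randomness analyzed in the paper, and the sum of squared input gradients used as a robustness proxy. Below is a minimal NumPy sketch under common conventions (a Bernoulli-per-timestep rate encoder, a finite-difference gradient estimate, and a toy linear model); the function names and the toy model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def poisson_encode(image, T, rng):
    """Rate-code pixel intensities in [0, 1] as spike trains over T timesteps:
    at each step a pixel fires with probability equal to its intensity."""
    image = np.asarray(image, dtype=float)
    return (rng.random((T,) + image.shape) < image).astype(np.uint8)

def grad_sq_sum(f, x, eps=1e-5):
    """Finite-difference estimate of sum_i (df/dx_i)^2, the sum-of-squared-
    gradients robustness proxy: smaller values mean a flatter input-output map."""
    g = np.zeros_like(x, dtype=float)
    for i in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (f(xp) - f(xm)) / (2 * eps)
    return float((g ** 2).sum())

rng = np.random.default_rng(0)
img = np.array([[0.1, 0.9], [0.5, 0.0]])
spikes = poisson_encode(img, T=1000, rng=rng)
rates = spikes.mean(axis=0)  # empirical firing rates approximate the intensities

# Toy model standing in for a network output; its gradient is w,
# so the analytic sum of squares is ||w||^2 = 0.3^2 + 0.4^2 = 0.25.
w = np.array([0.3, -0.4])
s = grad_sq_sum(lambda x: float(w @ x), np.array([1.0, 2.0]))
```

Comparing `grad_sq_sum` between a converted and a directly trained SNN is the kind of measurement the abstract's 13∼160× figure summarizes.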