Lee Chankyu, Sarwar Syed Shakib, Panda Priyadarshini, Srinivasan Gopalakrishnan, Roy Kaushik
Nanoelectronics Research Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, directly training deep SNNs using input spike events remains a difficult problem due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of Leaky Integrate-and-Fire (LIF) neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments demonstrate the effectiveness of the proposed spike-based learning on deep networks (VGG and Residual architectures), achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets among SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference operation in the spiking domain.
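The core difficulty the abstract describes can be illustrated with a minimal sketch: the spike generation function is a hard threshold whose true derivative is zero almost everywhere, so backpropagation substitutes an approximate (surrogate) derivative. The snippet below is an illustrative approximation in NumPy, not the authors' exact formulation; the leak factor `leak`, slope `alpha`, and the triangular surrogate shape are assumptions for demonstration.

```python
import numpy as np

def lif_step(v, x, leak=0.95, v_th=1.0):
    """One discrete time step of a leaky integrate-and-fire (LIF) neuron:
    leaky integration of input, thresholding, and hard reset."""
    v = leak * v + x                       # membrane potential leaks, then integrates input
    spike = (v >= v_th).astype(float)      # non-differentiable spike generation (step function)
    v = v * (1.0 - spike)                  # reset membrane potential where a spike fired
    return v, spike

def surrogate_grad(v, v_th=1.0, alpha=1.0):
    """Piecewise-linear (triangular) approximation of d(spike)/d(v).

    The exact derivative of the step function is zero away from threshold
    and undefined at it; this surrogate is nonzero near v_th, allowing
    error gradients to propagate through spiking layers during training.
    """
    return alpha * np.maximum(0.0, 1.0 - np.abs(v - v_th))
```

During the backward pass, the surrogate replaces the true spike derivative in the chain rule, while the forward pass still uses genuine binary spikes.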