Zhang Malu, Wang Jiadong, Wu Jibin, Belatreche Ammar, Amornpaisannon Burin, Zhang Zhixuan, Miriyala Venkata Pavan Kumar, Qu Hong, Chua Yansong, Carlson Trevor E, Li Haizhou
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1947-1958. doi: 10.1109/TNNLS.2021.3110991. Epub 2022 May 2.
Spiking neural networks (SNNs) use spatiotemporal spike patterns to represent and transmit information, a coding scheme that is not only biologically realistic but also well suited to ultralow-power, event-driven neuromorphic implementation. Like other deep learning techniques, deep SNNs (DeepSNNs) benefit from a deep architecture. However, training DeepSNNs is not straightforward because the well-studied error backpropagation (BP) algorithm is not directly applicable. In this article, we first examine why error BP does not work well in DeepSNNs. We then propose a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons and a spike-timing-dependent BP (STDBP) learning algorithm for DeepSNNs, in which the timing of individual spikes conveys information (temporal coding) and learning (BP) is performed on spike times in an event-driven manner. We show that DeepSNNs trained with the proposed single-spike-time-based learning algorithm can achieve state-of-the-art classification accuracy. Furthermore, using the model parameters obtained from the proposed STDBP learning algorithm, we demonstrate ultralow-power inference on a recently proposed neuromorphic inference accelerator. The experimental results show that the neuromorphic hardware consumes a total power of 0.751 mW and achieves a low latency of 47.71 ms to classify an image from the Modified National Institute of Standards and Technology (MNIST) dataset. Overall, this work investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision-making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware.
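The abstract does not spell out the neuron model, so the following is only a minimal sketch under an assumed form of the ReL-PSP kernel, K(t - t_j) = t - t_j for t > t_j and 0 otherwise. Under that assumption the membrane potential grows piecewise linearly and the output spike time has a closed form once the set of causal (earlier) input spikes is fixed; the function name and parameters below are illustrative, not from the article.

```python
# Sketch of a spiking neuron with a rectified linear PSP kernel (assumed form).
# V(t) = sum_j w_j * (t - t_j) over inputs that have already fired (t_j < t);
# the neuron emits a spike at the first time V(t) reaches the threshold.
import numpy as np


def rel_psp_spike_time(in_times: np.ndarray, weights: np.ndarray,
                       threshold: float = 1.0) -> float:
    """Return the first time the membrane potential crosses `threshold`.

    Solving V(t_out) = threshold over a fixed causal set C gives
        t_out = (threshold + sum_{j in C} w_j * t_j) / sum_{j in C} w_j.
    We grow C in order of input spike time and accept the first solution that
    falls before the next input spike arrives.
    """
    order = np.argsort(in_times)
    causal_w_sum = 0.0
    causal_wt_sum = 0.0
    for k, idx in enumerate(order):
        causal_w_sum += weights[idx]
        causal_wt_sum += weights[idx] * in_times[idx]
        if causal_w_sum <= 0.0:
            continue  # potential is not rising; threshold cannot be reached yet
        t_out = (threshold + causal_wt_sum) / causal_w_sum
        next_t = in_times[order[k + 1]] if k + 1 < len(order) else np.inf
        if in_times[idx] <= t_out < next_t:
            return t_out
    return np.inf  # neuron never fires


if __name__ == "__main__":
    t_in = np.array([0.1, 0.3, 0.7])    # earlier spikes carry stronger evidence
    w = np.array([1.2, 0.8, -0.5])
    print(rel_psp_spike_time(t_in, w))  # -> 0.68 for this toy example
```

Because the spike time is a piecewise-linear function of the input spike times and weights, it is differentiable almost everywhere, which is what makes spike-timing-based BP tractable.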
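Continuing the same assumption, a spike-timing-dependent BP scheme can propagate errors through the spike times themselves. With t_out = (theta + sum_{j in C} w_j t_j) / sum_{j in C} w_j for the causal set C, differentiation gives the derivatives used below; only inputs that fired before the output spike receive a gradient, which keeps the learning event-driven. This is a hypothetical sketch, not the article's implementation.

```python
# Sketch of spike-timing gradients for one neuron under the assumed ReL-PSP kernel.
import numpy as np


def spike_time_grads(in_times: np.ndarray, weights: np.ndarray, t_out: float):
    """Return (d t_out / d w, d t_out / d t_in) for a single ReL-PSP neuron."""
    causal = in_times < t_out                    # only earlier spikes contribute
    w_sum = np.sum(weights[causal])
    dt_dw = np.where(causal, (in_times - t_out) / w_sum, 0.0)
    dt_dt_in = np.where(causal, weights / w_sum, 0.0)
    return dt_dw, dt_dt_in


if __name__ == "__main__":
    t_in = np.array([0.1, 0.3, 0.7])
    w = np.array([1.2, 0.8, -0.5])
    t_out = 0.68                                 # from the forward sketch above
    dt_dw, dt_dt_in = spike_time_grads(t_in, w, t_out)
    # Example update: push the output spike earlier (dL/dt_out = +1), lr = 0.1.
    w_new = w - 0.1 * 1.0 * dt_dw
    print(dt_dw, dt_dt_in, w_new)
```

In a multilayer network, dt_out/dt_in plays the role of the layer-to-layer Jacobian, so the chain rule over spike times replaces the chain rule over activations used in conventional BP.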