IEEE Trans Biomed Circuits Syst. 2019 Dec;13(6):1664-1677. doi: 10.1109/TBCAS.2019.2945406. Epub 2019 Oct 4.
In this paper, we present an energy- and area-efficient spiking neural network (SNN) processor based on novel spike-count-based methods. For a low-cost SNN design, we propose hardware-friendly complexity-reduction techniques for both the learning and inference modes of operation. First, for the unsupervised learning process, we propose a spike-count-based learning method. The novel learning approach utilizes pre- and post-synaptic spike counts to reduce the bit-width of synaptic weights as well as the number of weight updates. For energy-efficient inference, we propose an accumulation-based computing scheme in which the input spikes on each input axon are accumulated, without instant membrane updates, until a pre-defined spike count is reached. In addition, computation-skip schemes identify meaningless computations and skip them to improve energy efficiency. Based on the proposed low-complexity design techniques, we design and implement the SNN processor in a 65 nm CMOS process. According to the implementation results, the SNN processor achieves 87.4% recognition accuracy on the MNIST dataset using only 230 k 1-bit synaptic weights and 400 excitatory neurons. The energy consumption is 0.26 pJ/SOP and 0.31 μJ/inference in inference mode, and 1.42 pJ/SOP and 2.63 μJ/learning in learning mode.
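The following is a minimal Python sketch of the accumulation-based computing scheme and a computation-skip check as the abstract describes them; the constants (ACC_THRESHOLD), the skip condition, and all identifiers are illustrative assumptions, not the processor's actual microarchitecture.

```python
import numpy as np

# Sketch: per-axon input spikes are counted and the membrane update is deferred
# until a pre-defined count is reached (assumed here as ACC_THRESHOLD).
N_AXONS = 784          # e.g. MNIST pixels mapped to input axons
N_NEURONS = 400        # excitatory neurons, as in the reported configuration
ACC_THRESHOLD = 4      # assumed pre-defined spike count before a batched update

rng = np.random.default_rng(0)
weights = rng.integers(0, 2, size=(N_AXONS, N_NEURONS)).astype(np.int8)  # 1-bit weights
membrane = np.zeros(N_NEURONS, dtype=np.int32)
spike_count = np.zeros(N_AXONS, dtype=np.int32)

def on_input_spike(axon: int) -> None:
    """Accumulate the spike; update membranes only once the count saturates."""
    spike_count[axon] += 1
    if spike_count[axon] < ACC_THRESHOLD:
        return                      # no membrane computation yet
    # Computation-skip (one possible "meaningless computation"): an axon whose
    # 1-bit weight row is all zero cannot change any membrane, so skip it.
    row = weights[axon]
    if row.any():
        membrane[:] += ACC_THRESHOLD * row
    spike_count[axon] = 0
```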