Department of Electrical and Computer Engineering and ISRC (Inter-University Semiconductor Research Center), Seoul National University, Seoul, 08826, Korea.
J Nanosci Nanotechnol. 2020 Nov 1;20(11):6603-6608. doi: 10.1166/jnn.2020.18772.
Deep learning achieves state-of-the-art results in various machine learning tasks, but for applications that require real-time inference, the high computational cost of deep neural networks becomes an efficiency bottleneck. To overcome this high computational cost, spiking neural networks (SNNs) have been proposed. Herein, we propose a hardware implementation of an SNN with gated Schottky diodes as synaptic devices. In addition, we apply L1 regularization for connection pruning of deep spiking neural networks that use gated Schottky diodes as synaptic devices. Applying L1 regularization eliminates the need for a re-training procedure because it prunes the weights based on the cost function. The compressed hardware-based SNN is energy efficient while achieving a classification accuracy of 97.85%, which is comparable to the 98.13% of the software deep neural network (DNN).
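As a rough illustration of the pruning scheme the abstract describes, the sketch below shows an L1 penalty added to the cost function followed by magnitude-based removal of small weights; the function names, the regularization strength lam, and the pruning threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): L1-regularized cost and
# magnitude-based connection pruning, as described in the abstract.

def l1_cost(weights, data_loss, lam=1e-4):
    # Total cost = task loss + L1 penalty that drives weights toward zero.
    return data_loss + lam * np.sum(np.abs(weights))

def prune_connections(weights, threshold=1e-3):
    # Connections whose trained weights fall below the threshold are removed;
    # because the L1 penalty is part of the cost function during training,
    # no separate re-training pass is needed after pruning.
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Example: weights learned under the L1 penalty, then pruned.
w = np.array([0.8, -0.0004, 0.02, -0.6, 0.0007])
w_pruned = prune_connections(w)  # small-magnitude weights set to zero
```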