

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing.

Authors

Gopalakrishnan Srinivasan, Kaushik Roy

Affiliation

Department of ECE, Purdue University, West Lafayette, IN, United States.

Publication

Front Neurosci. 2019 Mar 19;13:189. doi: 10.3389/fnins.2019.00189. eCollection 2019.

Abstract

In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP) based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
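The abstract names the ingredients of HB-STDP (probabilistic, layer-wise unsupervised updates of binary kernels via Hebbian and anti-Hebbian mechanisms) but does not state the rule itself. Below is a minimal Python sketch of how such a probabilistic binary-kernel STDP update could look; the switching probabilities `p_pot`/`p_dep`, the STDP window, and the ±1 weight encoding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hb_stdp_update(kernel, pre_spike_t, post_spike_t,
                   window=20.0, p_pot=0.08, p_dep=0.04):
    """Probabilistic Hebbian/anti-Hebbian update of a binary (+1/-1) kernel.

    kernel       : binary weights, same shape as the pre-synaptic spike map
    pre_spike_t  : most recent pre-synaptic spike times (same shape), in ms
    post_spike_t : time of the post-synaptic spike (scalar), in ms
    window       : STDP time window in ms (illustrative value)
    p_pot, p_dep : switching probabilities for potentiation / depression
    """
    dt = post_spike_t - pre_spike_t            # pre-before-post gives dt > 0
    hebbian = (dt > 0) & (dt <= window)        # causal pairs: candidate potentiation
    anti_hebbian = ~hebbian                    # acausal or expired pairs: depression

    flip = rng.random(kernel.shape)            # one random draw per synapse
    kernel = kernel.copy()
    kernel[hebbian & (flip < p_pot)] = +1      # probabilistically switch weight to +1
    kernel[anti_hebbian & (flip < p_dep)] = -1 # probabilistically switch weight to -1
    return kernel

# Toy usage: one 3x3 binary kernel with a random spike history.
k = rng.choice([-1, +1], size=(3, 3))
pre_t = rng.uniform(0.0, 40.0, size=(3, 3))
k = hb_stdp_update(k, pre_t, post_spike_t=25.0)
print(k)
```

On the memory side, storing each kernel weight in a single bit instead of 32 bits gives roughly 32× compression at the kernel level, which is consistent with the >20× kernel memory compression figure quoted in the abstract.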


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3c5/6434391/ad00ea9bb69e/fnins-13-00189-g0001.jpg
