

SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training

Authors

Liu Fangxin, Zhao Wenbo, Chen Yongbiao, Wang Zongwu, Yang Tao, Jiang Li

Affiliations

School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.

Shanghai Qi Zhi Institute, Shanghai, China.

Publication

Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.

Abstract

Spiking Neural Networks (SNNs) are a promising pathway toward low-power, event-driven neuromorphic hardware, thanks to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs across deep learning. However, because the signals transmitted in an SNN are non-differentiable, discrete, binary spike events, the spike-based activation function makes it difficult to apply gradient-based optimization to SNNs directly, leading to a performance gap (i.e., in accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges backpropagation (BP)-based learning and spike-timing-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme incorporates the global optimization process from BP and the efficient weight update derived from STDP. It not only avoids the non-differentiable derivative in the BP process but also exploits the local feature-extraction property of STDP. Consequently, our method lowers the likelihood of vanishing spikes in BP training and reduces the number of time steps, which in turn reduces network latency. In SSTDP, we employ temporal coding and use the Integrate-and-Fire (IF) neuron as the neuron model to provide considerable computational benefits. Our experiments show the effectiveness of the proposed SSTDP learning algorithm: compared to SNNs trained with other learning methods, it achieves the best classification accuracy of 99.3% on the Caltech 101 dataset, 98.1% on the MNIST dataset, and 91.3% on the CIFAR-10 dataset. It also surpasses the best inference accuracy of directly trained SNNs with 25-32× lower inference latency. Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference in the spiking domain, where the SSTDP method achieves 1.3-37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
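To make the two ingredients named in the abstract concrete, the sketch below illustrates, under simplifying assumptions, time-to-first-spike coding with Integrate-and-Fire neurons and an STDP-shaped weight update steered by a backpropagated error on output spike times. The function names (`if_first_spike_times`, `sstdp_update`) and the exact update rule are hypothetical simplifications, not the paper's implementation; the authors' actual code is in the repository linked above.

```python
# Minimal sketch, assuming a step-current IF neuron and an exponential
# STDP window. All names and constants here are illustrative; see
# https://github.com/MXHX7199/SNN-SSTDP for the real implementation.
import numpy as np

def if_first_spike_times(t_in, w, threshold=1.0, t_max=32):
    """Step-current IF layer: return each output neuron's first spike time.

    t_in: (n_in,) input spike times (smaller = stronger stimulus).
    w:    (n_in, n_out) synaptic weights.
    Neurons that never reach threshold report t_max.
    """
    v = np.zeros(w.shape[1])                 # membrane potentials (no leak: IF)
    t_out = np.full(w.shape[1], t_max)
    for t in range(t_max):
        active = (t_in <= t).astype(float)   # inputs that have spiked by t
        v += active @ w                      # integrate step currents
        fired = (v >= threshold) & (t_out == t_max)
        t_out[fired] = t                     # record first spike time only
    return t_out

def sstdp_update(w, t_pre, t_post, err, lr=0.01, tau=10.0):
    """STDP-shaped update gated by a BP-derived error on spike times.

    err: (n_out,) dL/dt_post from global backpropagation. Only causal
    (pre-before-post) spike pairs contribute, weighted by an exponential
    STDP window, so no derivative of the spike nonlinearity is needed.
    """
    dt = t_post[None, :] - t_pre[:, None]    # (n_in, n_out), > 0 if causal
    causal = (dt > 0).astype(float)
    window = np.exp(-np.abs(dt) / tau)       # local STDP-style trace
    # Increasing w on a causal synapse makes the neuron fire earlier,
    # so d t_post / d w is roughly -window; the chain rule with err
    # gives the gradient below.
    grad = -err[None, :] * window * causal
    return w - lr * grad
```

This reflects the division of labor the abstract describes: the error signal carries the global optimization direction from BP, while the exponential timing window supplies the local, STDP-like credit assignment between spike pairs.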

