
Training much deeper spiking neural networks with a small number of time-steps.

Affiliations

The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China.

Center for Data Science, Peking University, China.

Publication

Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.

Abstract

The Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The Artificial Neural Network (ANN) to SNN conversion method, the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the "quantization error" and the "deviation error" into account, which come from the discretization of SNN dynamics (i.e., the neuron's coding scheme) and the inconstant input currents at intermediate layers, respectively. In particular, our theories reveal that the "deviation error" depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method, which can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
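To make the "quantization error" concrete: in rate-coded ANN-to-SNN conversion, an integrate-and-fire neuron's spike count over T time-steps approximates a (clipped) ReLU activation, and the approximation gap shrinks as T grows, which is why few-time-step conversion is hard. The sketch below is illustrative only and assumes a simple subtract-reset neuron with a constant input current; it is not the paper's TTRBR method.

```python
def if_neuron_rate(z, theta=1.0, T=16):
    """Simulate an integrate-and-fire neuron with threshold `theta`,
    driven by a constant input current `z` for `T` time-steps.
    Returns the firing-rate estimate theta * (spike count) / T."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z                # integrate the input current
        if v >= theta:        # fire, then reset by subtraction
            v -= theta
            spikes += 1
    return theta * spikes / T

# The rate approximates the clipped activation min(max(z, 0), theta);
# the quantization error is bounded by roughly theta / T, so it shrinks
# as the number of time-steps T grows.
z = 0.37
for T in (4, 16, 64):
    print(f"T={T:3d}  rate={if_neuron_rate(z, T=T):.4f}  target={z}")
```

Note that the "deviation error" the abstract describes is a separate effect: at intermediate layers the input current is no longer constant across time-steps, so even this rate estimate can drift from the ANN activation.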

