
Training much deeper spiking neural networks with a small number of time-steps.

Affiliations

The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China.

Center for Data Science, Peking University, China.

Publication Information

Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.

DOI: 10.1016/j.neunet.2022.06.001
PMID: 35759953
Abstract

Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The Artificial Neural Network (ANN) to SNN conversion method, which is the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the “quantization error” and the “deviation error” into account, which come from the discretization of SNN dynamics (i.e., the neuron’s coding scheme) and the inconstant input currents at intermediate layers, respectively. Particularly, our theories reveal that the “deviation error” depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method that can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.

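The generic ANN-to-SNN conversion scheme the abstract builds on can be made concrete with a toy rate-coding example. The sketch below is a minimal, self-contained illustration of that generic idea (integrate-and-fire neurons whose firing rates, scaled by the threshold, approximate ReLU activations), not the paper’s TTRBR method; all names and the threshold-balancing heuristic are illustrative choices. The v_th/T output granularity it exhibits corresponds to the “quantization error” the abstract analyzes; the “deviation error” does not arise here because the single layer receives a constant input current, whereas intermediate layers of a converted network see input currents that vary across time-steps.

```python
# Toy rate-coded ANN-to-SNN conversion for one fully connected layer.
# Illustrative sketch only: not the paper's TTRBR implementation.
import numpy as np

rng = np.random.default_rng(0)

# A toy "ANN layer": y = relu(W x)
W = rng.normal(size=(64, 32))
x = rng.random(32)
ann_out = np.maximum(W @ x, 0.0)

def snn_layer(W, x, T=32, v_th=None):
    """Simulate integrate-and-fire neurons for T time-steps.

    The constant input current is W @ x; the firing rate times v_th
    approximates the ReLU activation (rate coding). As a simple
    threshold-tuning heuristic, v_th defaults to the layer's maximum
    input current, so no activation is clipped.
    """
    current = W @ x
    if v_th is None:
        v_th = max(current.max(), 1e-6)
    v = np.zeros_like(current)       # membrane potentials
    spikes = np.zeros_like(current)  # accumulated spike counts
    for _ in range(T):
        v += current                 # integrate the input current
        fired = v >= v_th
        spikes += fired
        v[fired] -= v_th             # soft reset (reset-by-subtraction)
    # Decode: firing rate scaled by the threshold approximates ReLU.
    return spikes / T * v_th

for T in (8, 32, 128):
    err = np.abs(snn_layer(W, x, T=T) - ann_out).mean()
    print(f"T={T:4d}  mean |SNN - ANN| = {err:.4f}")
```

Running the loop shows the gap to the ANN output shrinking roughly in proportion to 1/T, which is why conversion with a small number of time-steps is hard: in a network more than 100 layers deep, these per-layer errors compound.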

Similar Articles

1
Training much deeper spiking neural networks with a small number of time-steps.
Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.
2
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
3
Spiking Deep Residual Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.
4
A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
5
Self-architectural knowledge distillation for spiking neural networks.
Neural Netw. 2024 Oct;178:106475. doi: 10.1016/j.neunet.2024.106475. Epub 2024 Jun 19.
6
High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
7
Toward High-Accuracy and Low-Latency Spiking Neural Networks With Two-Stage Optimization.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3189-3203. doi: 10.1109/TNNLS.2023.3337176. Epub 2025 Feb 6.
8
Rethinking the performance comparison between SNNS and ANNS.
Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
9
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
10
Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.

Cited By

1
An all integer-based spiking neural network with dynamic threshold adaptation.
Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.
2
Efficient training of spiking neural networks with temporally-truncated local backpropagation through time.
Front Neurosci. 2023 Apr 6;17:1047008. doi: 10.3389/fnins.2023.1047008. eCollection 2023.
3
Critically synchronized brain waves form an effective, robust and flexible basis for human memory and learning.
Sci Rep. 2023 Mar 16;13(1):4343. doi: 10.1038/s41598-023-31365-6.