
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.

Authors

Hu Yangfan, Zheng Qian, Jiang Xudong, Pan Gang

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.

DOI: 10.1109/TPAMI.2023.3275769
PMID: 37721891
Abstract

Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. However, it remains a challenge to train deep SNNs due to the discrete spiking function. A popular approach to circumvent this challenge is ANN-to-SNN conversion. However, due to the quantization error and accumulating error, it often requires lots of time steps (high inference latency) to achieve high performance, which negates SNN's advantages. To this end, this paper proposes Fast-SNN that achieves high performance with low latency. We demonstrate the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the minimization of the quantization error, we show that the sequential error is the primary cause of the accumulating error, which is addressed by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Codes are available at: https://github.com/yangfan-hu/Fast-SNN.
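The equivalence the abstract describes — temporal quantization in an SNN mirroring spatial quantization in an ANN — can be illustrated with a minimal numerical sketch. This is a simplified toy under stated assumptions, not the paper's implementation: it uses a floor-quantized clipped ReLU and a soft-reset integrate-and-fire (IF) neuron with a constant per-step input, and it omits the signed IF neuron and layer-wise fine-tuning the paper introduces. All function names are illustrative.

```python
import numpy as np

def quantized_act(x, T, theta=1.0):
    """Spatial quantization: a clipped ReLU floor-quantized to T levels."""
    return np.floor(np.clip(x, 0.0, theta) * T / theta) * theta / T

def if_rate(x, T, theta=1.0):
    """Temporal quantization: soft-reset IF neuron driven by a constant
    input x for T time steps; returns the rate-coded output (spike count
    scaled back to activation units)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                 # integrate the input current
        if v >= theta:         # fire when the threshold is reached
            v -= theta         # soft reset: subtract, don't zero
            spikes += 1
    return spikes * theta / T

# The two quantizers agree, including for negative and over-threshold
# inputs (at most one spike per step caps the output at theta).
for x in [-0.3, 0.1, 0.33, 0.72, 0.95, 1.3]:
    assert np.isclose(quantized_act(x, T=7), if_rate(x, T=7))
```

Because at most one spike can fire per step, the spike count is automatically confined to [0, T], matching the clipped ReLU's range; correcting the remaining sequential error for sign mismatches is what the paper's signed IF neuron addresses, which this sketch does not model.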


Similar Articles

1. Quantization Framework for Fast Spiking Neural Networks.
   Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
2. Toward High-Accuracy and Low-Latency Spiking Neural Networks With Two-Stage Optimization.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3189-3203. doi: 10.1109/TNNLS.2023.3337176. Epub 2025 Feb 6.
3. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
   Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
4. SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
   Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
5. Training much deeper spiking neural networks with a small number of time-steps.
   Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.
6. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
7. Spiking neural networks fine-tuning for brain image segmentation.
   Front Neurosci. 2023 Nov 1;17:1267639. doi: 10.3389/fnins.2023.1267639. eCollection 2023.
8. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
   Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
9. Spiking Deep Residual Networks.
   IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.

Cited By

1. A multisynaptic spiking neuron for simultaneously encoding spatiotemporal dynamics.
   Nat Commun. 2025 Aug 4;16(1):7155. doi: 10.1038/s41467-025-62251-6.
2. Dynamic spatio-temporal pruning for efficient spiking neural networks.
   Front Neurosci. 2025 Mar 25;19:1545583. doi: 10.3389/fnins.2025.1545583. eCollection 2025.
3. An all integer-based spiking neural network with dynamic threshold adaptation.
   Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.
4. Darwin3: a large-scale neuromorphic chip with a novel ISA and on-chip learning.
   Natl Sci Rev. 2024 Mar 18;11(5):nwae102. doi: 10.1093/nsr/nwae102. eCollection 2024 May.