
High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.

Authors

Gao Haoran, He Junxian, Wang Haibing, Wang Tengxiao, Zhong Zhengqing, Yu Jianyi, Wang Ying, Tian Min, Shi Cong

Affiliations

The School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China.

State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.

Publication

Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.

DOI: 10.3389/fnins.2023.1141701
PMID: 36968504
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10030499/
Abstract

Spiking neural networks (SNNs) have attracted intensive attention due to the efficient event-driven computing paradigm. Among SNN training methods, the ANN-to-SNN conversion is usually regarded to achieve state-of-the-art recognition accuracies. However, many existing ANN-to-SNN techniques impose lengthy post-conversion steps like threshold balancing and weight renormalization, to compensate for the inherent behavioral discrepancy between artificial and spiking neurons. In addition, they require a long temporal window to encode and process as many spikes as possible to better approximate the real-valued ANN neurons, leading to a high inference latency. To overcome these challenges, we propose a calcium-gated bipolar leaky integrate and fire (Ca-LIF) spiking neuron model to better approximate the functions of the ReLU neurons widely adopted in ANNs. We also propose a quantization-aware training (QAT)-based framework leveraging an off-the-shelf QAT toolkit for easy ANN-to-SNN conversion, which directly exports the learned ANN weights to SNNs requiring no post-conversion processing. We benchmarked our method on typical deep network structures with varying time-step lengths from 8 to 128. Compared to other research, our converted SNNs reported competitively high-accuracy performance, while enjoying relatively short inference time steps.
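The mechanism the abstract leans on — a spiking neuron whose firing rate over a time window approximates a ReLU activation — can be illustrated with a minimal integrate-and-fire simulation. This is a generic sketch, not the paper's Ca-LIF model (the abstract does not give its equations); the function name and parameters below are illustrative assumptions.

```python
def lif_simulate(inputs, threshold=1.0, leak=1.0):
    """Simulate a (leaky) integrate-and-fire neuron over a spike window.

    inputs: weighted input current per time step.
    leak=1.0 gives a non-leaky IF neuron; leak < 1.0 decays the
    membrane potential each step. Uses a subtract-threshold ("soft")
    reset, which preserves the residual charge above threshold.
    Returns the 0/1 spike train.
    """
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration
        if v >= threshold:
            spikes.append(1)
            v -= threshold        # soft reset
        else:
            spikes.append(0)
    return spikes

# Rate coding: a constant input of 0.5 against threshold 1.0 fires on
# every other step, so firing rate ≈ input / threshold.
train = lif_simulate([0.5] * 8)
rate = sum(train) / len(train)    # 0.5
```

With the subtract-threshold reset, a constant input x fires at rate ≈ x/threshold, so a window of T time steps can encode roughly T + 1 distinct activation levels. That correspondence between quantization levels and spike counts is the general intuition behind pairing quantization-aware ANN training with short-time-step SNN inference.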


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6a9/10030499/c406c4ad7c8c/fnins-17-1141701-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6a9/10030499/43698e93d1b7/fnins-17-1141701-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6a9/10030499/6f1b4c0bb898/fnins-17-1141701-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6a9/10030499/b52bf607c8f7/fnins-17-1141701-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c6a9/10030499/f00853183e0e/fnins-17-1141701-g0005.jpg

Similar Articles

1
High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
2
Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
3
Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
4
A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
5
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
6
LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6249-6262. doi: 10.1109/TNNLS.2021.3073016. Epub 2022 Oct 27.
7
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
8
Rethinking Pretraining as a Bridge From ANNs to SNNs.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9054-9067. doi: 10.1109/TNNLS.2022.3217796. Epub 2024 Jul 8.
9
Training much deeper spiking neural networks with a small number of time-steps.
Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.
10
Rethinking the performance comparison between SNNS and ANNS.
Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.

Cited By

1
BN-SNN: Spiking neural networks with bistable neurons for object detection.
PLoS One. 2025 Jul 10;20(7):e0327513. doi: 10.1371/journal.pone.0327513. eCollection 2025.
2
An all integer-based spiking neural network with dynamic threshold adaptation.
Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.
3
Learnable Leakage and Onset-Spiking Self-Attention in SNNs with Local Error Signals.
Sensors (Basel). 2023 Dec 12;23(24):9781. doi: 10.3390/s23249781.
4
Incorporating structural plasticity into self-organization recurrent networks for sequence learning.
Front Neurosci. 2023 Aug 1;17:1224752. doi: 10.3389/fnins.2023.1224752. eCollection 2023.

References

1
BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons.
Front Neurosci. 2022 Oct 12;16:991851. doi: 10.3389/fnins.2022.991851. eCollection 2022.
2
Spiking Deep Residual Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.
3
Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems.
Front Neurosci. 2021 Mar 4;15:638474. doi: 10.3389/fnins.2021.638474. eCollection 2021.
4
Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
5
Towards spike-based machine intelligence with neuromorphic computing.
Nature. 2019 Nov;575(7784):607-617. doi: 10.1038/s41586-019-1677-2. Epub 2019 Nov 27.
6
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
7
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
8
Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3227-3235. doi: 10.1109/TNNLS.2017.2726060. Epub 2017 Aug 1.
9
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
10
Learning real-world stimuli in a neural network with spike-driven synaptic dynamics.
Neural Comput. 2007 Nov;19(11):2881-912. doi: 10.1162/neco.2007.19.11.2881.