Suppr 超能文献



用于快速脉冲神经网络的可训练量化

Trainable quantization for Speedy Spiking Neural Networks.

作者信息

Castagnetti Andrea, Pegatoquet Alain, Miramond Benoît

机构信息

LEAT, Université Côte d'Azur, CNRS, Sophia Antipolis, France.

出版信息

Front Neurosci. 2023 Mar 3;17:1154241. doi: 10.3389/fnins.2023.1154241. eCollection 2023.

DOI:10.3389/fnins.2023.1154241
PMID:36937675
原文链接:https://pmc.ncbi.nlm.nih.gov/articles/PMC10020579/
Abstract

Spiking neural networks (SNNs) are considered the third generation of Artificial Neural Networks. SNNs perform computation using neurons and synapses that communicate through binary, asynchronous signals known as spikes. They have attracted significant research interest in recent years because their computing paradigm theoretically allows sparse and low-power operation. This hypothetical gain, assumed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of a mature learning framework, and a significant data-processing latency that ultimately generates energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy drop between ANNs and SNNs. In this article we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neural model. This model allows the threshold of neurons to be adapted during training and implements efficient quantization strategies. This novel approach better explains the global behavior of SNNs and minimizes quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while beating state-of-the-art accuracy and preserving high sparsity on the main datasets considered in the neuromorphic community.
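The latency/quantization trade-off the abstract describes can be illustrated with a minimal sketch (an illustrative toy, not the authors' implementation; the function name `if_neuron_rate` and all parameter values are assumptions): an integrate-and-fire neuron with reset-by-subtraction encodes an analog activation as a spike count over T timesteps, so the decoded value is quantized to multiples of theta/T and the error shrinks only as T grows.

```python
def if_neuron_rate(x, theta=1.0, T=8):
    """Integrate-and-fire neuron with reset-by-subtraction (soft reset):
    integrates a constant input x for T timesteps, then decodes the spike
    count back into an activation estimate, spikes * theta / T."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                 # integrate the input current
        if v >= theta:         # threshold crossing emits a spike
            v -= theta         # soft reset keeps the residual charge
            spikes += 1
    return spikes * theta / T  # decoded (quantized) activation

# The decoded value is a multiple of theta/T, so the quantization error
# shrinks as the number of timesteps T grows -- at the cost of latency.
for T in (4, 8, 32, 128):
    est = if_neuron_rate(0.37, T=T)
    print(f"T={T:3d}  estimate={est:.4f}  error={abs(est - 0.37):.4f}")
```

Running this shows the estimate converging toward the analog value as T increases, which is exactly why low-timestep (low-latency) SNNs suffer the quantization-driven accuracy drop the paper analyzes.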

摘要

脉冲神经网络被视为第三代人工神经网络。脉冲神经网络使用神经元和突触进行计算,二者通过称为脉冲的二进制异步信号进行通信。由于其计算范式在理论上允许稀疏和低功耗运算,它们在近年来引起了广泛的研究兴趣。然而,这种自神经形态研究之初就被假定的优势一直受到三个主要因素的限制:缺乏能与经典深度学习相竞争的有效学习规则、缺乏成熟的学习框架,以及最终产生能量开销的显著数据处理延迟。虽然前两个限制最近已在文献中得到解决,但延迟这一主要问题尚未解决。事实上,脉冲神经元之间的信息并非瞬间交换,而是随着脉冲的产生和在网络中的传播随时间逐渐积累。本文关注量化误差,它是脉冲神经网络离散信息表示的主要后果之一。我们认为量化误差是人工神经网络与脉冲神经网络之间精度下降的主要来源。在本文中,我们对脉冲神经网络的量化噪声进行了深入表征,随后基于一种新的可训练脉冲神经元模型提出了一种端到端直接学习方法。该模型允许在训练期间调整神经元的阈值,并实现了高效的量化策略。这种新方法更好地解释了脉冲神经网络的全局行为,并在训练期间最小化量化噪声。由此得到的脉冲神经网络可以在有限数量的时间步内训练,从而降低延迟,同时在神经形态领域常用的主要数据集上超越当前最先进的精度并保持高稀疏性。
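The claim that adapting the neuron threshold reduces quantization noise can also be sketched in a toy experiment (a hypothetical setup under assumed names and values, not the paper's training method): for spike-count quantization over T timesteps, sweeping the threshold theta over a batch of activations shows that the theta matched to the activation range minimizes the quantization MSE.

```python
import random

def quantize(x, theta, T):
    """Toy spike-count quantization: an IF neuron with soft reset emits
    roughly floor(T * x / theta) spikes (capped at T); decoding multiplies
    the count by theta / T."""
    spikes = min(T, max(0, int(T * x / theta)))
    return spikes * theta / T

random.seed(0)
acts = [random.uniform(0.0, 0.5) for _ in range(1000)]  # toy activation batch
T = 8  # few timesteps, i.e. a low-latency SNN

# Sweep candidate thresholds and keep the one with the lowest MSE.
best_mse, best_theta = min(
    (sum((quantize(a, th, T) - a) ** 2 for a in acts) / len(acts), th)
    for th in (0.25, 0.5, 1.0, 2.0)
)
print(f"best threshold={best_theta}  mse={best_mse:.5f}")
```

A threshold matched to the activation range (here 0.5) uses all T quantization levels; too small a theta clips large activations, while too large a theta wastes levels. This is the intuition behind learning the threshold jointly with the weights, as the paper proposes, rather than fixing it by hand.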

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd57/10020579/8ba325c3f816/fnins-17-1154241-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd57/10020579/1f03117639fc/fnins-17-1154241-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd57/10020579/86cbbd02ccae/fnins-17-1154241-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd57/10020579/d6e22de49664/fnins-17-1154241-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd57/10020579/9ce208902bae/fnins-17-1154241-g0004.jpg

相似文献

1
Trainable quantization for Speedy Spiking Neural Networks.用于快速脉冲神经网络的可训练量化
Front Neurosci. 2023 Mar 3;17:1154241. doi: 10.3389/fnins.2023.1154241. eCollection 2023.
2
Quantization Framework for Fast Spiking Neural Networks.快速脉冲神经网络的量化框架
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
3
SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.SpQuant-SNN:具有稀疏激活的超低精度膜电位释放设备端脉冲神经网络应用的潜力。
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
4
High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.使用量化感知训练框架和钙门控双极泄漏积分发放神经元实现高精度深度人工神经网络到脉冲神经网络的转换。
Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
5
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.Fast-SNN:通过转换量化人工神经网络实现快速脉冲神经网络。
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
6
A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks.一种用于深度脉冲神经网络有效训练和快速推理的串联学习规则。
IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):446-460. doi: 10.1109/TNNLS.2021.3095724. Epub 2023 Jan 5.
7
SPIDEN: deep Spiking Neural Networks for efficient image denoising.SPIDEN:用于高效图像去噪的深度脉冲神经网络。
Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.
8
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.SSTDP:用于高效脉冲神经网络训练的监督式脉冲时间依赖可塑性
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
9
Backpropagation-Based Learning Techniques for Deep Spiking Neural Networks: A Survey.基于反向传播的深度脉冲神经网络学习技术:综述。
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11906-11921. doi: 10.1109/TNNLS.2023.3263008. Epub 2024 Sep 3.
10
ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator.ALBSNN:具有精度损失估计器的超低延迟自适应局部二值脉冲神经网络
Front Neurosci. 2023 Sep 13;17:1225871. doi: 10.3389/fnins.2023.1225871. eCollection 2023.

引用本文的文献

1
SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.SpQuant-SNN:具有稀疏激活的超低精度膜电位释放设备端脉冲神经网络应用的潜力。
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
2
SPIDEN: deep Spiking Neural Networks for efficient image denoising.SPIDEN:用于高效图像去噪的深度脉冲神经网络。
Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.

本文引用的文献

1
Quantization Framework for Fast Spiking Neural Networks.快速脉冲神经网络的量化框架
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
2
DIET-SNN: A Low-Latency Spiking Neural Network With Direct Input Encoding and Leakage and Threshold Optimization.DIET-SNN:一种具有直接输入编码以及泄漏和阈值优化的低延迟脉冲神经网络。
IEEE Trans Neural Netw Learn Syst. 2023 Jun;34(6):3174-3182. doi: 10.1109/TNNLS.2021.3111897. Epub 2023 Jun 1.
3
The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks.用于脉冲神经网络系统评估的海德堡脉冲数据集。
IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2744-2757. doi: 10.1109/TNNLS.2020.3044364. Epub 2022 Jul 6.
4
Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence.面向嵌入式人工智能的硬件脉冲神经元设计空间探索。
Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.
5
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.深入探索脉冲神经网络:VGG和残差架构。
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
6
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.将连续值深度网络转换为用于图像分类的高效事件驱动网络
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
7
Unsupervised learning of digit recognition using spike-timing-dependent plasticity.使用基于脉冲时间依赖可塑性的无监督数字识别学习。
Front Comput Neurosci. 2015 Aug 3;9:99. doi: 10.3389/fncom.2015.00099. eCollection 2015.