

An all integer-based spiking neural network with dynamic threshold adaptation.

Author information

Zou Chenglong, Cui Xiaoxin, Feng Shuo, Chen Guang, Zhong Yi, Dai Zhenhui, Wang Yuan

Affiliations

Peking University Chongqing Research Institute of Big Data, Chongqing, China.

School of Mathematical Science, Peking University, Beijing, China.

Publication information

Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.

DOI: 10.3389/fnins.2024.1449020
PMID: 39741532
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11685137/
Abstract

Spiking Neural Networks (SNNs) are typically regarded as the third generation of neural networks due to their inherent event-driven computing capabilities and remarkable energy efficiency. However, training an SNN that possesses fast inference speed and accuracy comparable to modern artificial neural networks (ANNs) remains a considerable challenge. In this article, a sophisticated SNN modeling algorithm incorporating a novel dynamic threshold adaptation mechanism is proposed. It aims to eliminate the spiking synchronization error that commonly occurs in many traditional ANN2SNN conversion works. Additionally, all variables in the proposed SNNs, including the membrane potential, threshold, and synaptic weights, are quantized to integers, making them highly compatible with hardware implementation. Experimental results indicate that the proposed spiking LeNet and VGG-Net achieve accuracies exceeding 99.45% and 93.15% on the MNIST and CIFAR-10 datasets, respectively, with only 4 and 8 time steps required to simulate one sample. Due to this all integer-based quantization process, the required computational operations are significantly reduced, potentially providing a substantial energy-efficiency advantage for numerous edge computing applications.
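
The all-integer neuron dynamics described above can be illustrated with a short sketch. This is not the paper's exact algorithm: the function `integer_lif_step`, the reset-by-subtraction rule, and the `theta_up`/`theta_down`/`theta_min` adaptation constants are assumptions chosen for illustration. It only shows the general idea of keeping membrane potential, threshold, and weights as integers while the firing threshold adapts per neuron over time.

```python
import numpy as np

def integer_lif_step(v, theta, w, spikes_in, theta_up=2, theta_down=1, theta_min=4):
    """One time step of an integer-only integrate-and-fire layer with a
    dynamic (adaptive) per-neuron firing threshold. All state is int64."""
    # Integrate integer synaptic current: weights and input spikes are integers.
    v = v + w @ spikes_in
    # Fire wherever the membrane potential reaches the current threshold.
    spikes_out = (v >= theta).astype(np.int64)
    # Reset by subtraction so the residual potential carries over.
    v = v - spikes_out * theta
    # Dynamic threshold adaptation (illustrative rule): raise the threshold of
    # neurons that just fired; relax non-firing neurons back toward theta_min.
    theta = theta + spikes_out * theta_up
    theta = np.maximum(theta - (1 - spikes_out) * theta_down, theta_min)
    return v, theta, spikes_out

# Usage: a layer of 5 neurons driven by 8 binary inputs for 4 time steps,
# mirroring the small step counts (4 and 8) reported in the abstract.
rng = np.random.default_rng(0)
w = rng.integers(-3, 4, size=(5, 8))        # integer synaptic weights
v = np.zeros(5, dtype=np.int64)             # integer membrane potentials
theta = np.full(5, 8, dtype=np.int64)       # integer thresholds
for t in range(4):
    spikes_in = rng.integers(0, 2, size=8)  # binary input spike vector
    v, theta, spikes_out = integer_lif_step(v, theta, w, spikes_in)
```

Because every quantity stays an integer, each step uses only additions, comparisons, and integer multiplies, which is what makes this style of model attractive for hardware implementation.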


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5d/11685137/dc93c6f71816/fnins-18-1449020-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5d/11685137/5af6d7320b12/fnins-18-1449020-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5d/11685137/cc9ae5648877/fnins-18-1449020-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5d/11685137/cd18aafb8c69/fnins-18-1449020-g0004.jpg

Similar articles

1. An all integer-based spiking neural network with dynamic threshold adaptation.
Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.
2. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
5. Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
6. Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation.
Front Neurosci. 2021 Feb 18;15:629000. doi: 10.3389/fnins.2021.629000. eCollection 2021.
7. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
8. Training much deeper spiking neural networks with a small number of time-steps.
Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.
9. Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
10. Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms.
Sensors (Basel). 2021 May 7;21(9):3240. doi: 10.3390/s21093240.

References cited in this article

1. Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
2. Backpropagation-Based Learning Techniques for Deep Spiking Neural Networks: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11906-11921. doi: 10.1109/TNNLS.2023.3263008. Epub 2024 Sep 3.
3. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
4. Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
5. Training much deeper spiking neural networks with a small number of time-steps.
Neural Netw. 2022 Sep;153:254-268. doi: 10.1016/j.neunet.2022.06.001. Epub 2022 Jun 15.
6. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
7. A review of learning in biologically plausible spiking neural networks.
Neural Netw. 2020 Feb;122:253-272. doi: 10.1016/j.neunet.2019.09.036. Epub 2019 Oct 11.
8. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
9. Deep learning in spiking neural networks.
Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
10. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.