

Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation.

Authors

Hwang Sungmin, Chang Jeesoo, Oh Min-Hye, Min Kyung Kyu, Jang Taejin, Park Kyungchul, Yu Junsu, Lee Jong-Ho, Park Byung-Gook

Affiliation

Inter-university Semiconductor Research Center (ISRC) and Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea.

Publication

Front Neurosci. 2021 Feb 18;15:629000. doi: 10.3389/fnins.2021.629000. eCollection 2021.

DOI: 10.3389/fnins.2021.629000
PMID: 33679308
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7935527/
Abstract

Spiking neural networks (SNNs) have attracted many researchers' interest due to their biological plausibility and event-driven characteristics. In particular, many recent studies have reported high-performance SNNs, comparable to conventional analog-valued neural networks (ANNs), obtained by converting weights trained in ANNs into SNNs. However, unlike ANNs, SNNs have an inherent latency required to reach their best performance because of differences in neuron operation. In SNNs, temporal integration exists alongside spatial integration, and information is encoded by spike trains rather than by the continuous values used in ANNs. Therefore, SNNs take time to reach a steady state of performance. The latency is worse in deep networks and must be reduced for practical applications. In this work, we propose a pre-charged membrane potential for latency reduction in SNNs. A variety of neural network applications (e.g., classification and autoencoders using the MNIST and CIFAR-10 datasets) are trained and converted to SNNs to demonstrate the effect of the proposed approach. The latency of SNNs is successfully reduced without accuracy loss. In addition, we propose a delayed evaluation method, by which errors during the initial transient are discarded. The error spikes occurring in the initial transient are removed by delayed evaluation, resulting in further latency reduction. Delayed evaluation can be combined with the pre-charged membrane potential for additional latency reduction. Finally, we also show the advantages of the proposed methods in reducing the number of spikes required to reach a steady state of performance in SNNs, enabling energy-efficient computing.
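
The two ideas in the abstract can be sketched for a single integrate-and-fire layer. This is a minimal illustration, not the paper's implementation: the layer shape, the 0.5 pre-charge fraction, and the 20-step warm-up are assumed values chosen only to show the mechanism.

```python
import numpy as np

def run_if_layer(inputs, weights, T=100, v_th=1.0,
                 pre_charge=0.5, eval_delay=20):
    """Simulate one integrate-and-fire layer for T timesteps.

    pre_charge: fraction of the threshold used to initialize the
        membrane potential (0.0 reproduces the usual zero start);
        starting closer to threshold lets early inputs spike sooner.
    eval_delay: timesteps discarded before output spikes are
        counted (delayed evaluation, ignoring the initial transient).
    """
    n_out = weights.shape[0]
    v = np.full(n_out, pre_charge * v_th)  # pre-charged membrane potential
    counts = np.zeros(n_out)
    for t in range(T):
        v += weights @ inputs       # spatial integration of input current
        spikes = v >= v_th
        v[spikes] -= v_th           # reset by subtraction
        if t >= eval_delay:         # delayed evaluation: count spikes
            counts += spikes        # only after the warm-up period
    return counts / (T - eval_delay)  # output firing rates

rng = np.random.default_rng(0)
w = rng.normal(0, 0.3, size=(4, 8))   # illustrative random weights
x = rng.random(8) * 0.2               # constant rate-coded input
rates = run_if_layer(x, w)
```

In an ANN-to-SNN conversion setting, `rates` approximates the converted layer's analog activations; the pre-charge shortens the time to reach that steady state, and the evaluation delay keeps transient error spikes out of the output count.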


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/a6e333aca66e/fnins-15-629000-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/c373660c5c45/fnins-15-629000-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/c4cb1014a7e6/fnins-15-629000-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/51ff3a7777b4/fnins-15-629000-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/9aabffc527f9/fnins-15-629000-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/1353939f23bd/fnins-15-629000-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/d11cfd2f1269/fnins-15-629000-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/4aa6af56d7f6/fnins-15-629000-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/161878f613b0/fnins-15-629000-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/51c3a527381d/fnins-15-629000-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/1efe7cb65e67/fnins-15-629000-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/7c3b24cbce6a/fnins-15-629000-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/0c10069f4e0d/fnins-15-629000-g013.jpg

Similar Articles

1
Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation.
Front Neurosci. 2021 Feb 18;15:629000. doi: 10.3389/fnins.2021.629000. eCollection 2021.
2
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
3
Spiking Deep Residual Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.
4
Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
5
Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
6
Gradient-based feature-attribution explainability methods for spiking neural networks.
Front Neurosci. 2023 Sep 27;17:1153999. doi: 10.3389/fnins.2023.1153999. eCollection 2023.
7
Rethinking the performance comparison between SNNS and ANNS.
Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
8
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
9
A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
10
Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.
Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.

Cited By

1
BN-SNN: Spiking neural networks with bistable neurons for object detection.
PLoS One. 2025 Jul 10;20(7):e0327513. doi: 10.1371/journal.pone.0327513. eCollection 2025.
2
BayesianSpikeFusion: accelerating spiking neural network inference via Bayesian fusion of early prediction.
Front Neurosci. 2024 Aug 5;18:1420119. doi: 10.3389/fnins.2024.1420119. eCollection 2024.
3
Memcapacitor Crossbar Array with Charge Trap NAND Flash Structure for Neuromorphic Computing.
Adv Sci (Weinh). 2023 Nov;10(32):e2303817. doi: 10.1002/advs.202303817. Epub 2023 Sep 26.

References

1
Spiking Deep Residual Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.
2
Impact of the Sub-Resting Membrane Potential on Accurate Inference in Spiking Neural Networks.
Sci Rep. 2020 Feb 26;10(1):3515. doi: 10.1038/s41598-020-60572-8.
3
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
4
Deep learning in spiking neural networks.
Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
5
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.
6
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
7
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
8
Deep learning in neural networks: an overview.
Neural Netw. 2015 Jan;61:85-117. doi: 10.1016/j.neunet.2014.09.003. Epub 2014 Oct 13.
9
Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.
10
Introduction to spiking neural networks: Information processing, learning and applications.
Acta Neurobiol Exp (Wars). 2011;71(4):409-33. doi: 10.55782/ane-2011-1862.