Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks.

Author Information

Kim Youngeun, Li Yuhang, Moitra Abhishek, Yin Ruokai, Panda Priyadarshini

Affiliations

Department of Electrical Engineering, Yale University, New Haven, CT, United States.

Publication Information

Front Neurosci. 2023 Jul 31;17:1230002. doi: 10.3389/fnins.2023.1230002. eCollection 2023.

DOI: 10.3389/fnins.2023.1230002
PMID: 37583415
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10423932/
Abstract

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, that is Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage to capture the temporal dynamics of spikes. Although the required memory cost for LIF neurons significantly increases as the input dimension goes larger, a technique to reduce memory for LIF neurons has not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares the LIF neurons across different layers and channels. Our EfficientLIF-Net achieves comparable accuracy with the standard SNNs while bringing up to ~4.3× forward memory efficiency and ~21.9× backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
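
The core idea, sharing one LIF neuron, and hence one membrane-voltage buffer, across layers or channels, can be illustrated with a short sketch. The following is a minimal PyTorch toy version written from the abstract alone, not the authors' implementation (that is in the linked repository); the SharedLIF name, the decay factor, and the threshold value are assumptions made for the example.

```python
# Minimal sketch of cross-layer LIF sharing, assuming a PyTorch-style SNN.
# Illustrative only; the authors' actual EfficientLIF-Net code is at
# https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
import torch
import torch.nn as nn


class SharedLIF(nn.Module):
    """A Leaky-Integrate-and-Fire neuron with a single, reusable membrane buffer."""

    def __init__(self, decay: float = 0.5, v_th: float = 1.0):
        super().__init__()
        self.decay = decay  # leak factor applied to the membrane voltage
        self.v_th = v_th    # firing threshold
        self.v = None       # the one membrane-voltage tensor shared by all callers

    def reset(self):
        """Clear membrane state at the start of each input sequence."""
        self.v = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.decay * self.v + x       # leaky integration
        spike = (self.v >= self.v_th).float()  # binary spike output
        self.v = self.v - spike * self.v_th    # soft reset where the neuron fired
        return spike


# Two conv layers with matching output shapes route their activations through
# the SAME LIF instance, so one membrane buffer serves both layers instead of
# each layer holding its own.
conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 16, 3, padding=1)
lif = SharedLIF()

lif.reset()
for t in range(4):  # four timesteps of input
    x = torch.rand(1, 3, 32, 32)
    out = lif(conv2(lif(conv1(x))))
```

In a conventional SNN each layer keeps its own membrane tensor, so the two-layer pipeline above would store two buffers; routing both layers through one SharedLIF keeps a single buffer, which is the kind of saving the abstract quantifies (~4.3× forward, ~21.9× backward for LIF neurons). Training such a network additionally needs a surrogate gradient for the thresholding step, which this forward-only sketch omits.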

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/a4cce6440fa1/fnins-17-1230002-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/efe740bc26da/fnins-17-1230002-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/65348a21abe4/fnins-17-1230002-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/b5b94b894006/fnins-17-1230002-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/b99d3309bf7b/fnins-17-1230002-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/71c4dddc9a4c/fnins-17-1230002-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/989c7e6fbaee/fnins-17-1230002-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/4117363633f2/fnins-17-1230002-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c7b/10423932/32199859fadd/fnins-17-1230002-g0009.jpg

Similar Articles

1. Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks.
Front Neurosci. 2023 Jul 31;17:1230002. doi: 10.3389/fnins.2023.1230002. eCollection 2023.
2. Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks From Scratch.
Front Neurosci. 2021 Dec 9;15:773954. doi: 10.3389/fnins.2021.773954. eCollection 2021.
3. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6249-6262. doi: 10.1109/TNNLS.2021.3073016. Epub 2022 Oct 27.
4. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.
5. Electrocardiography Classification with Leaky Integrate-and-Fire Neurons in an Artificial Neural Network-Inspired Spiking Neural Network Framework.
Sensors (Basel). 2024 May 26;24(11):3426. doi: 10.3390/s24113426.
6. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
7. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
8. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
9. Reconstruction of Adaptive Leaky Integrate-and-Fire Neuron to Enhance the Spiking Neural Networks Performance by Establishing Complex Dynamics.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2619-2633. doi: 10.1109/TNNLS.2023.3336690. Epub 2025 Feb 6.
10. Efficient human activity recognition with spatio-temporal spiking neural networks.
Front Neurosci. 2023 Sep 14;17:1233037. doi: 10.3389/fnins.2023.1233037. eCollection 2023.

Cited By

1. Neuromorphic algorithms for brain implants: a review.
Front Neurosci. 2025 Apr 11;19:1570104. doi: 10.3389/fnins.2025.1570104. eCollection 2025.

References

1. Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
2. Lead federated neuromorphic learning for wireless edge artificial intelligence.
Nat Commun. 2022 Jul 25;13(1):4269. doi: 10.1038/s41467-022-32020-w.
3. Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization.
IEEE Trans Neural Netw Learn Syst. 2023 Jun;34(6):2791-2805. doi: 10.1109/TNNLS.2021.3109064. Epub 2023 Jun 1.
4. Progressive Tandem Learning for Pattern Recognition With Deep Spiking Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7824-7840. doi: 10.1109/TPAMI.2021.3114196. Epub 2022 Oct 4.
5. Unsupervised Adaptive Weight Pruning for Energy-Efficient Neuromorphic Systems.
Front Neurosci. 2020 Nov 12;14:598876. doi: 10.3389/fnins.2020.598876. eCollection 2020.
6. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
7. Towards spike-based machine intelligence with neuromorphic computing.
Nature. 2019 Nov;575(7784):607-617. doi: 10.1038/s41586-019-1677-2. Epub 2019 Nov 27.
8. A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications.
Front Neurosci. 2019 Apr 26;13:405. doi: 10.3389/fnins.2019.00405. eCollection 2019.
9. Coarse-Fine Convolutional Deep-Learning Strategy for Human Activity Recognition.
Sensors (Basel). 2019 Mar 31;19(7):1556. doi: 10.3390/s19071556.
10. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.