


Dynamic spatio-temporal pruning for efficient spiking neural networks.

Authors

Gou Shuiping, Fu Jiahui, Sha Yu, Cao Zhen, Guo Zhang, Eshraghian Jason K, Li Ruimin, Jiao Licheng

Affiliations

Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China.

Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, United States.

Publication

Front Neurosci. 2025 Mar 25;19:1545583. doi: 10.3389/fnins.2025.1545583. eCollection 2025.

DOI: 10.3389/fnins.2025.1545583
PMID: 40201191
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11975901/
Abstract

Spiking neural networks (SNNs), which draw from biological neuron models, have the potential to improve the computational efficiency of artificial neural networks (ANNs) due to their event-driven nature and sparse data flow. SNNs rely on dynamical sparsity, in that neurons are trained to activate sparsely to minimize data communication. This is critical when accounting for hardware, given the bandwidth limitations between memory and processor. Given that neurons are sparsely activated, weights are accessed less frequently and can potentially be pruned with less performance degradation in an SNN than in an equivalent ANN counterpart. Reducing the number of synaptic connections between neurons also relaxes memory demands for neuromorphic processors. In this paper, we propose a spatio-temporal pruning algorithm that dynamically adapts to reduce the temporal redundancy that often exists in SNNs when processing Dynamic Vision Sensor (DVS) datasets. Spatial pruning is executed based on both global parameter statistics and inter-layer parameter count, and is shown to reduce model degradation under extreme sparsity. We provide an ablation study that isolates the various components of spatio-temporal pruning, and find that our approach achieves excellent performance across all datasets, with especially high performance on datasets with time-varying features. We achieved a 0.69% improvement on the DVS128 Gesture dataset, despite the common expectation that pruning typically degrades performance. Notably, this enhancement comes with an impressive 98.18% reduction in parameter space and a 50% reduction in time redundancy.
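The abstract states that spatial pruning is driven by global parameter statistics; the exact criterion is not given here. As a minimal sketch, assuming a simple magnitude-based scheme with a single global threshold (the function name and the percentile criterion are illustrative assumptions, not the paper's method):

```python
import numpy as np

def global_magnitude_prune(layers, sparsity=0.9):
    """Zero out weights whose magnitude falls below a threshold computed
    from the statistics of ALL layers pooled together, rather than a
    per-layer threshold. Returns masked copies of the weight arrays."""
    all_w = np.concatenate([np.abs(w).ravel() for w in layers])
    thresh = np.quantile(all_w, sparsity)  # global magnitude threshold
    return [np.where(np.abs(w) >= thresh, w, 0.0) for w in layers]

# Toy usage with two random "layers"
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(8, 8))]
pruned = global_magnitude_prune(layers, sparsity=0.9)
kept = sum(int((w != 0).sum()) for w in pruned)
total = sum(w.size for w in layers)
print(f"fraction of weights kept: {kept / total:.2f}")
```

Because the threshold is global, layers with many small weights are pruned harder than layers with large weights, which is one reason the paper also factors in inter-layer parameter count to avoid starving small layers at extreme sparsity.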


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/43f5ad44291d/fnins-19-1545583-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/bebb7bd8e091/fnins-19-1545583-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/10d2d618ad6b/fnins-19-1545583-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/5fa7b728f8f2/fnins-19-1545583-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/8109c7d2c83c/fnins-19-1545583-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/ffb0d0ce3955/fnins-19-1545583-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/89bc1c778a7f/fnins-19-1545583-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/5ecba9fe26a0/fnins-19-1545583-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/43f5ad44291d/fnins-19-1545583-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/bebb7bd8e091/fnins-19-1545583-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/10d2d618ad6b/fnins-19-1545583-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/5fa7b728f8f2/fnins-19-1545583-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/8109c7d2c83c/fnins-19-1545583-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/ffb0d0ce3955/fnins-19-1545583-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/89bc1c778a7f/fnins-19-1545583-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/5ecba9fe26a0/fnins-19-1545583-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c5d7/11975901/43f5ad44291d/fnins-19-1545583-g0008.jpg

Similar articles

1. Dynamic spatio-temporal pruning for efficient spiking neural networks.
Front Neurosci. 2025 Mar 25;19:1545583. doi: 10.3389/fnins.2025.1545583. eCollection 2025.
2. SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
3. Unsupervised Adaptive Weight Pruning for Energy-Efficient Neuromorphic Systems.
Front Neurosci. 2020 Nov 12;14:598876. doi: 10.3389/fnins.2020.598876. eCollection 2020.
4. STSC-SNN: Spatio-Temporal Synaptic Connection with temporal convolution and attention for spiking neural networks.
Front Neurosci. 2022 Dec 23;16:1079357. doi: 10.3389/fnins.2022.1079357. eCollection 2022.
5. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.
6. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.
Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.
7. Neuron pruning in temporal domain for energy efficient SNN processor design.
Front Neurosci. 2023 Nov 30;17:1285914. doi: 10.3389/fnins.2023.1285914. eCollection 2023.
8. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks.
Front Neurosci. 2023 Nov 10;17:1261543. doi: 10.3389/fnins.2023.1261543. eCollection 2023.
9. SGLFormer: Spiking Global-Local-Fusion Transformer with high performance.
Front Neurosci. 2024 Mar 12;18:1371290. doi: 10.3389/fnins.2024.1371290. eCollection 2024.
10. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6249-6262. doi: 10.1109/TNNLS.2021.3073016. Epub 2022 Oct 27.

References cited in this article

1. Developmental Plasticity-Inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2025 Jan;47(1):240-251. doi: 10.1109/TPAMI.2024.3467268. Epub 2024 Dec 4.
2. Cognitive Control and Neural Activity during Human Development: Evidence for Synaptic Pruning.
J Neurosci. 2024 Jun 26;44(26):e0373242024. doi: 10.1523/JNEUROSCI.0373-24.2024.
3. Brain-state mediated modulation of inter-laminar dependencies in visual cortex.
Nat Commun. 2024 Jun 14;15(1):5105. doi: 10.1038/s41467-024-49144-w.
4. Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip.
Nat Commun. 2024 May 25;15(1):4464. doi: 10.1038/s41467-024-47811-6.
5. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence.
Sci Adv. 2023 Oct 6;9(40):eadi1480. doi: 10.1126/sciadv.adi1480.
6. Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.
7. Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization.
IEEE Trans Neural Netw Learn Syst. 2023 Jun;34(6):2791-2805. doi: 10.1109/TNNLS.2021.3109064. Epub 2023 Jun 1.
8. Mechanisms governing activity-dependent synaptic pruning in the developing mammalian CNS.
Nat Rev Neurosci. 2021 Nov;22(11):657-673. doi: 10.1038/s41583-021-00507-y. Epub 2021 Sep 20.
9. A brain-inspired computational model for spatio-temporal information processing.
Neural Netw. 2021 Nov;143:74-87. doi: 10.1016/j.neunet.2021.05.015. Epub 2021 May 16.
10. Towards spike-based machine intelligence with neuromorphic computing.
Nature. 2019 Nov;575(7784):607-617. doi: 10.1038/s41586-019-1677-2. Epub 2019 Nov 27.