

Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.

Authors

Kugele Alexander, Pfeil Thomas, Pfeiffer Michael, Chicca Elisabetta

Affiliations

Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany.

Bosch Center for Artificial Intelligence, Renningen, Germany.

Publication

Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.

DOI: 10.3389/fnins.2020.00439
PMID: 32431592
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7214871/
Abstract

Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
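The abstract's central idea — give every connection, feed-forward or recurrent, an explicit delay of one time step so that ANN rollouts match SNN propagation delays, and merge paths of different delays (including shortcuts) at the output — can be illustrated with a toy sketch. This is not the paper's code: the three-layer topology, the weights, and the linear update rule are invented here purely for illustration.

```python
def streaming_rollout(inputs, steps):
    """Toy streaming rollout of a 3-layer chain x -> h1 -> h2 -> out
    plus a spatio-temporal shortcut x -> out.

    Every edge carries a delay of one time step, so each layer reads
    the *previous* step's activity of its sources and all layers can
    update in parallel -- mirroring SNN propagation delays.
    """
    h1 = h2 = out = 0.0
    outputs = []
    for t in range(steps):
        x = inputs[t] if t < len(inputs) else 0.0
        # All updates use state from step t-1 (delay 1 on every edge).
        # The shortcut x -> out (1 edge) delivers a fast, coarse response;
        # the deep path x -> h1 -> h2 -> out (3 edges) arrives two steps
        # later, so the output refines as more of the sequence is processed.
        new_h1 = 0.5 * x
        new_h2 = 0.5 * h1
        new_out = h2 + 0.1 * x   # deep path merged with the shortcut path
        h1, h2, out = new_h1, new_h2, new_out
        outputs.append(out)
    return outputs
```

For a single input pulse at t=0, the shortcut produces an immediate approximate response at the first step, while the deep path's contribution only reaches the output two steps later — the low-latency, improving-over-time behavior the abstract describes.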


Article figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83e1/7214871/15d671f6415b/fnins-14-00439-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83e1/7214871/e324a36590b5/fnins-14-00439-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83e1/7214871/8baeb147a0a9/fnins-14-00439-g0003.jpg

Similar Articles

1. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.
   Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.
2. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks.
   Front Neurosci. 2023 Nov 10;17:1261543. doi: 10.3389/fnins.2023.1261543. eCollection 2023.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
   Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
5. Rethinking the performance comparison between SNNS and ANNS.
   Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
6. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
   Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
7. A TTFS-based energy and utilization efficient neuromorphic CNN accelerator.
   Front Neurosci. 2023 May 5;17:1121592. doi: 10.3389/fnins.2023.1121592. eCollection 2023.
8. STSC-SNN: Spatio-Temporal Synaptic Connection with temporal convolution and attention for spiking neural networks.
   Front Neurosci. 2022 Dec 23;16:1079357. doi: 10.3389/fnins.2022.1079357. eCollection 2022.
9. Quantization Framework for Fast Spiking Neural Networks.
   Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
10. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
   Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.

Cited By

1. Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification.
   Front Neurosci. 2025 Jan 29;18:1516868. doi: 10.3389/fnins.2024.1516868. eCollection 2024.
2. Benchmarking the speed-accuracy tradeoff in object recognition by humans and neural networks.
   J Vis. 2025 Jan 2;25(1):4. doi: 10.1167/jov.25.1.4.
3. Auto-Spikformer: Spikformer architecture search.

References

1. Event-Based Vision: A Survey.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):154-180. doi: 10.1109/TPAMI.2020.3008413. Epub 2021 Dec 7.
2. Rethinking the performance comparison between SNNS and ANNS.
   Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
3. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain.
   Front Neurosci. 2024 Jul 23;18:1372257. doi: 10.3389/fnins.2024.1372257. eCollection 2024.
4. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks.
   Front Neurosci. 2023 Nov 10;17:1261543. doi: 10.3389/fnins.2023.1261543. eCollection 2023.
5. BIDL: a brain-inspired deep learning framework for spatiotemporal processing.
   Front Neurosci. 2023 Jul 26;17:1213720. doi: 10.3389/fnins.2023.1213720. eCollection 2023.
6. STSC-SNN: Spatio-Temporal Synaptic Connection with temporal convolution and attention for spiking neural networks.
   Front Neurosci. 2022 Dec 23;16:1079357. doi: 10.3389/fnins.2022.1079357. eCollection 2022.
7. The spike gating flow: A hierarchical structure-based spiking neural network for online gesture recognition.
   Front Neurosci. 2022 Nov 2;16:923587. doi: 10.3389/fnins.2022.923587. eCollection 2022.
8. A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network.
   Front Neurosci. 2022 May 26;16:759900. doi: 10.3389/fnins.2022.759900. eCollection 2022.
   Front Neurosci. 2018 Dec 3;12:891. doi: 10.3389/fnins.2018.00891. eCollection 2018.
9. Deep Learning With Spiking Neurons: Opportunities and Challenges.
   Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
10. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
   Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
11. A Sparse-View CT Reconstruction Method Based on Combination of DenseNet and Deconvolution.
   IEEE Trans Med Imaging. 2018 Jun;37(6):1407-1417. doi: 10.1109/TMI.2018.2823338.
12. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
   Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
13. CIFAR10-DVS: An Event-Stream Dataset for Object Classification.
   Front Neurosci. 2017 May 30;11:309. doi: 10.3389/fnins.2017.00309. eCollection 2017.
14. A Motion-Based Feature for Event-Based Pattern Recognition.
   Front Neurosci. 2017 Jan 4;10:594. doi: 10.3389/fnins.2016.00594. eCollection 2016.
15. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.
   Sci Rep. 2017 Jan 12;7:40703. doi: 10.1038/srep40703.