Suppr 超能文献




Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes.

Authors

Xu Changqing, Zhang Wenrui, Liu Yu, Li Peng

Affiliations

School of Microelectronics, Xidian University, Xi'an, China.

Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States.

Publication

Front Neurosci. 2020 Feb 14;14:104. doi: 10.3389/fnins.2020.00104. eCollection 2020.

DOI: 10.3389/fnins.2020.00104
PMID: 32140093
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7043203/
Abstract

Spiking neural networks (SNNs) are the third generation of neural networks and can explore both rate and temporal coding for energy-efficient event-driven computation. However, the decision accuracy of existing SNN designs is contingent upon processing a large number of spikes over a long period. Nevertheless, the switching power of SNN hardware accelerators is proportional to the number of spikes processed while the length of spike trains limits throughput and static power efficiency. This paper presents the first study on developing temporal compression to significantly boost throughput and reduce energy dissipation of digital hardware SNN accelerators while being applicable to multiple spike codes. The proposed compression architectures consist of low-cost input spike compression units, novel input-and-output-weighted spiking neurons, and reconfigurable time constant scaling to support large and flexible time compression ratios. Our compression architectures can be transparently applied to any given pre-designed SNNs employing either rate or temporal codes while incurring minimal modification of the neural models, learning algorithms, and hardware design. Using spiking speech and image recognition datasets, we demonstrate the feasibility of supporting large time compression ratios of up to 16×, delivering up to 15.93×, 13.88×, and 86.21× improvements in throughput, energy dissipation, the tradeoffs between hardware area, runtime, energy, and classification accuracy, respectively based on different spike codes on a Xilinx Zynq-7000 FPGA. These results are achieved while incurring little extra hardware overhead.
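The core idea the abstract describes — folding consecutive timesteps together and letting each compressed input carry the spike count of its window, in the spirit of the paper's input-weighted spiking neurons — can be sketched minimally. This is a hypothetical illustration of the compression step only, not the authors' hardware design; `compress_spike_train` and its interface are assumptions for the sketch.

```python
# Hypothetical sketch of temporal compression (not the paper's actual
# circuit): a binary spike train of length T is folded by compression
# ratio C, and each compressed timestep carries an integer weight equal
# to the spike count in its window, loosely mirroring input-weighted
# spiking neurons.
def compress_spike_train(spikes, ratio):
    assert len(spikes) % ratio == 0, "T must be a multiple of the ratio"
    return [sum(spikes[i:i + ratio]) for i in range(0, len(spikes), ratio)]

train = [1, 0, 1, 1, 0, 0, 1, 0]             # T = 8 timesteps
compressed = compress_spike_train(train, 4)  # 2 steps at 4x compression
print(compressed)                            # -> [3, 1]
```

Because the compressed inputs are weighted rather than binary, the neuron dynamics must be rescaled to match; per the abstract, that is the role of the reconfigurable time-constant scaling.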

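The two spike-code families the abstract says the method supports can be illustrated with small encoders. The forms below are common textbook encodings assumed for illustration; they are not the paper's exact scheme, and the function names are hypothetical.

```python
import random

def rate_encode(x, timesteps, rng=random.Random(0)):
    # Rate code: spike probability per timestep proportional to x in [0, 1],
    # so intensity is carried by the spike count. Seeded RNG for repeatability.
    return [1 if rng.random() < x else 0 for _ in range(timesteps)]

def ttfs_encode(x, timesteps):
    # Temporal code (time-to-first-spike): larger x fires earlier, so
    # intensity is carried by spike timing rather than spike count.
    t = min(timesteps - 1, int(round((1.0 - x) * (timesteps - 1))))
    return [1 if i == t else 0 for i in range(timesteps)]

print(ttfs_encode(1.0, 8))  # -> [1, 0, 0, 0, 0, 0, 0, 0]
print(ttfs_encode(0.0, 8))  # -> [0, 0, 0, 0, 0, 0, 0, 1]
```

Rate codes need many timesteps for a stable count while temporal codes concentrate information in timing, which is why a compression scheme that works transparently across both, as claimed here, is nontrivial.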

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/f4c842835a75/fnins-14-00104-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/aa84d9c014e7/fnins-14-00104-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/41b6ea0377ee/fnins-14-00104-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/306b2b387766/fnins-14-00104-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/43bf0ad74818/fnins-14-00104-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/5c9f0d6b6a52/fnins-14-00104-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/4a51d0e45fe4/fnins-14-00104-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/032d90b206d0/fnins-14-00104-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/8c34bc4ff64f/fnins-14-00104-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2c7/7043203/91908dcc8baf/fnins-14-00104-g0010.jpg

Similar Articles

1
Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes.
Front Neurosci. 2020 Feb 14;14:104. doi: 10.3389/fnins.2020.00104. eCollection 2020.
2
Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets.
Front Neurosci. 2020 Mar 13;14:143. doi: 10.3389/fnins.2020.00143. eCollection 2020.
3
An FPGA implementation of Bayesian inference with spiking neural networks.
Front Neurosci. 2024 Jan 5;17:1291051. doi: 10.3389/fnins.2023.1291051. eCollection 2023.
4
An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification.
Neural Comput. 2020 Jan;32(1):182-204. doi: 10.1162/neco_a_01245. Epub 2019 Nov 8.
5
A TTFS-based energy and utilization efficient neuromorphic CNN accelerator.
Front Neurosci. 2023 May 5;17:1121592. doi: 10.3389/fnins.2023.1121592. eCollection 2023.
6
SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
7
Probabilistic Spike Propagation for Efficient Hardware Implementation of Spiking Neural Networks.
Front Neurosci. 2021 Jul 15;15:694402. doi: 10.3389/fnins.2021.694402. eCollection 2021.
8
Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems.
Front Neurosci. 2021 Mar 4;15:638474. doi: 10.3389/fnins.2021.638474. eCollection 2021.
9
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
10
Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware.
Front Neurosci. 2024 Aug 6;18:1425861. doi: 10.3389/fnins.2024.1425861. eCollection 2024.

Cited By

1
ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator.
Front Neurosci. 2023 Sep 13;17:1225871. doi: 10.3389/fnins.2023.1225871. eCollection 2023.
2
MAP-SNN: Mapping spike activities with multiplicity, adaptability, and plasticity into bio-plausible spiking neural networks.
Front Neurosci. 2022 Sep 20;16:945037. doi: 10.3389/fnins.2022.945037. eCollection 2022.

References

1
Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets.
Front Neurosci. 2020 Mar 13;14:143. doi: 10.3389/fnins.2020.00143. eCollection 2020.
2
Sparse Computation in Adaptive Spiking Neural Networks.
Front Neurosci. 2019 Jan 8;12:987. doi: 10.3389/fnins.2018.00987. eCollection 2018.
3
Feature Representations for Neuromorphic Audio Spike Streams.
Front Neurosci. 2018 Feb 9;12:23. doi: 10.3389/fnins.2018.00023. eCollection 2018.
4
A Digital Liquid State Machine With Biologically Inspired Learning and Its Application to Speech Recognition.
IEEE Trans Neural Netw Learn Syst. 2015 Nov;26(11):2635-49. doi: 10.1109/TNNLS.2015.2388544. Epub 2015 Jan 27.
5
Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.
6
Introduction to spiking neural networks: Information processing, learning and applications.
Acta Neurobiol Exp (Wars). 2011;71(4):409-33. doi: 10.55782/ane-2011-1862.
7
Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns.
Neuron. 2009 Feb 26;61(4):597-608. doi: 10.1016/j.neuron.2009.01.008.
8
Resonance and selective communication via bursts in neurons having subthreshold oscillations.
Biosystems. 2002 Oct-Dec;67(1-3):95-102. doi: 10.1016/s0303-2647(02)00067-9.
9
Real-time computing without stable states: a new framework for neural computation based on perturbations.
Neural Comput. 2002 Nov;14(11):2531-60. doi: 10.1162/089976602760407955.
10
Spike-based strategies for rapid processing.
Neural Netw. 2001 Jul-Sep;14(6-7):715-25. doi: 10.1016/s0893-6080(01)00083-1.