

Effective Plug-Ins for Reducing Inference-Latency of Spiking Convolutional Neural Networks During Inference Phase.

Authors

Chen Xuan, Yuan Xiaopeng, Fu Gaoming, Luo Yuanyong, Yue Tao, Yan Feng, Wang Yuxuan, Pan Hongbing

Affiliation

The School of Electronic Science and Engineering, Nanjing University, Nanjing, China.

Publication

Front Comput Neurosci. 2021 Oct 18;15:697469. doi: 10.3389/fncom.2021.697469. eCollection 2021.

DOI: 10.3389/fncom.2021.697469
PMID: 34733147
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8558256/
Abstract

Convolutional Neural Networks (CNNs) are effective and mature in the field of classification, while Spiking Neural Networks (SNNs) are energy-saving thanks to their sparse data flow and event-driven working mechanism. Previous work demonstrated that CNNs can be converted into equivalent Spiking Convolutional Neural Networks (SCNNs) without obvious accuracy loss, with the conversion covering different functional layers such as Convolutional (Conv), Fully Connected (FC), Avg-pooling, Max-pooling, and Batch-Normalization (BN) layers. To reduce inference latency, existing research has mainly concentrated on normalizing weights to increase the firing rate of neurons; other approaches intervene during the training phase or alter the network architecture. However, little attention has been paid to the end of the inference phase. From this new perspective, this paper presents four stopping criteria as low-cost plug-ins to reduce the inference latency of SCNNs. The proposed methods are validated on the MATLAB and PyTorch platforms with Spiking-AlexNet on the CIFAR-10 dataset and Spiking-LeNet-5 on the MNIST dataset. Simulation results reveal that, compared to state-of-the-art methods, the proposed method shortens the average inference latency of Spiking-AlexNet from 892 to 267 time steps (almost 3.34 times faster) with an accuracy decline from 87.95 to 87.72%. With our methods, four types of Spiking-LeNet-5 need only 24-70 time steps per image with an accuracy decline of no more than 0.1%, while models without our methods require 52-138 time steps, almost 1.92 to 3.21 times slower than ours.
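The abstract does not reproduce the four stopping criteria themselves, but the plug-in idea (monitoring the output layer inside the time-step loop and terminating as soon as the decision looks reliable) can be sketched in a few lines. The following is a minimal PyTorch sketch under stated assumptions: the IFReadout class, the constant input currents, and the margin-based stopping rule are all hypothetical illustrations, not the paper's actual implementation.

```python
import torch

# Minimal integrate-and-fire (IF) readout so the sketch is self-contained.
# `advance` plays the role of one simulation time step of a rate-coded SCNN:
# it integrates a (hypothetical) constant input current and emits spikes.
class IFReadout:
    def __init__(self, n_classes, threshold=1.0):
        self.threshold = threshold
        self.v = torch.zeros(n_classes)  # membrane potentials

    def advance(self, current):
        self.v += current                           # integrate input current
        spikes = (self.v >= self.threshold).float() # fire where threshold reached
        self.v -= spikes * self.threshold           # reset by subtraction
        return spikes

def infer_with_early_stop(readout, current, t_max=300, margin=8):
    """Accumulate output spikes over time and stop as soon as the leading
    class is `margin` spikes ahead of the runner-up. This margin rule is an
    illustrative stopping criterion, not necessarily one of the paper's four."""
    counts = torch.zeros_like(current)
    for t in range(1, t_max + 1):
        counts += readout.advance(current)
        top2 = counts.topk(2).values
        if top2[0] - top2[1] >= margin:
            break                                   # confident enough: stop early
    return counts.argmax().item(), t

# Hypothetical output currents for a 10-class problem (e.g., MNIST digits).
currents = torch.tensor([0.05, 0.1, 0.02, 0.6, 0.1, 0.05, 0.2, 0.1, 0.05, 0.1])
label, steps = infer_with_early_stop(IFReadout(10), currents)
print(f"predicted class {label} after {steps} time steps")
```

Because the loop exits as soon as the margin is met, latency becomes input-dependent: unambiguous inputs terminate within a few tens of time steps, while harder ones run toward t_max. This input dependence is consistent with the per-image time-step ranges reported in the abstract.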


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/f8780142520f/fncom-15-697469-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/345593a1a961/fncom-15-697469-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/4a52d748669d/fncom-15-697469-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/781068bd0f33/fncom-15-697469-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/a9e4c0816f11/fncom-15-697469-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/eddb5fcf5f60/fncom-15-697469-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/558beeca17d1/fncom-15-697469-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/698e9941d7b8/fncom-15-697469-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a893/8558256/472cdb259d6b/fncom-15-697469-g0009.jpg

Similar Articles

1. Effective Plug-Ins for Reducing Inference-Latency of Spiking Convolutional Neural Networks During Inference Phase.
Front Comput Neurosci. 2021 Oct 18;15:697469. doi: 10.3389/fncom.2021.697469. eCollection 2021.
2. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
3. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
4. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
5. Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks From Scratch.
Front Neurosci. 2021 Dec 9;15:773954. doi: 10.3389/fnins.2021.773954. eCollection 2021.
6. Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
7. CQ Training: Minimizing Accuracy Loss in Conversion From Convolutional Neural Networks to Spiking Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):11600-11611. doi: 10.1109/TPAMI.2023.3286121. Epub 2023 Sep 5.
8. SPIDEN: deep Spiking Neural Networks for efficient image denoising.
Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.
9. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
10. Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms.
Sensors (Basel). 2021 May 7;21(9):3240. doi: 10.3390/s21093240.
