

Exploring the Connection Between Binary and Spiking Neural Networks.

Authors

Lu Sen, Sengupta Abhronil

Affiliation

School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, United States.

Publication

Front Neurosci. 2020 Jun 24;14:535. doi: 10.3389/fnins.2020.00535. eCollection 2020.

DOI: 10.3389/fnins.2020.00535
PMID: 32670002
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7327094/
Abstract

On-chip edge intelligence has necessitated the exploration of algorithmic techniques to reduce the compute requirements of current machine learning frameworks. This work aims to bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks; both are driven by the same motivation, and yet synergies between the two have not been fully explored. We show that training Spiking Neural Networks in the extreme quantization regime results in near full precision accuracies on large-scale datasets like CIFAR-100 and ImageNet. An important implication of this work is that Binary Spiking Neural Networks can be enabled by "In-Memory" hardware accelerators catered for Binary Neural Networks without suffering any accuracy degradation due to binarization. We utilize standard training techniques for non-spiking networks to generate our spiking networks by a conversion process, and also perform an extensive empirical analysis and explore simple design-time and run-time optimization techniques for reducing inference latency of spiking networks (both for binary and full-precision models) by an order of magnitude over prior work. Our implementation source code and trained models are available at https://github.com/NeuroCompLab-psu/SNN-Conversion.
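To make the pipeline concrete, below is a minimal NumPy sketch of the two ingredients the abstract combines: BNN-style weight binarization (keep only the sign of each weight, rescaled by the layer's mean absolute weight) and rate-coded ANN-to-SNN conversion with integrate-and-fire neurons whose thresholds are balanced against activations observed on calibration data. This is an illustrative toy under stated assumptions, not the authors' implementation; the network sizes, random inputs, and the particular max-activation threshold-balancing scheme are assumptions here, and the real training and conversion code lives in the linked repository.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer fully connected ANN (ReLU hidden layer); sizes are arbitrary.
W1 = rng.normal(0.0, 0.5, size=(64, 100))  # hidden x input
W2 = rng.normal(0.0, 0.5, size=(10, 64))   # output x hidden

def binarize(W):
    # Extreme quantization: keep only the sign of each weight, rescaled by
    # the layer's mean absolute weight so output magnitudes stay comparable.
    return np.sign(W) * np.abs(W).mean()

B1, B2 = binarize(W1), binarize(W2)

def snn_infer(x, weights, thresholds, T=200):
    # Rate-coded inference with integrate-and-fire neurons. Each neuron
    # accumulates membrane potential and spikes when it crosses the layer
    # threshold; the threshold is then subtracted ("reset by subtraction").
    # Spike rates over T timesteps approximate the ANN's ReLU activations.
    mems = [np.zeros(w.shape[0]) for w in weights]
    counts = np.zeros(weights[-1].shape[0])
    for _ in range(T):
        inp = x  # input applied as a constant current at every timestep
        for w, mem, th in zip(weights, mems, thresholds):
            mem += w @ inp
            spikes = (mem >= th).astype(float)
            mem -= spikes * th
            inp = spikes
        counts += inp
    return counts / T  # output firing rates

# Design-time threshold balancing: set each layer's threshold to the largest
# pre-activation seen on calibration inputs (random here; real data in practice).
calib = rng.random((32, 100))
pre1 = calib @ B1.T
th1 = pre1.max()
th2 = (np.maximum(pre1, 0) @ B2.T).max()

x = rng.random(100)
print("output rates:", np.round(snn_infer(x, [B1, B2], [th1, th2]), 3))

Raising T trades latency for fidelity to the underlying ANN activations; the run-time optimizations the paper explores reduce how large T must be before the spike rates stabilize.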


Figures 1-12 (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/d9ee0b2e92c8/fnins-14-00535-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/367a8fa5e150/fnins-14-00535-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/62b92db00f57/fnins-14-00535-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/7324f246b33d/fnins-14-00535-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/30eef56e682c/fnins-14-00535-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/2e10c699f267/fnins-14-00535-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/50fd263a3aef/fnins-14-00535-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/60200fc0c4b8/fnins-14-00535-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/cea3e99d2fc2/fnins-14-00535-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/50b03a0a3463/fnins-14-00535-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/ce6144b44f64/fnins-14-00535-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a69/7327094/612107af1e17/fnins-14-00535-g0012.jpg

Similar Articles

1
Exploring the Connection Between Binary and Spiking Neural Networks.
Front Neurosci. 2020 Jun 24;14:535. doi: 10.3389/fnins.2020.00535. eCollection 2020.
2
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
3
Neuroevolution Guided Hybrid Spiking Neural Network Training.
Front Neurosci. 2022 Apr 25;16:838523. doi: 10.3389/fnins.2022.838523. eCollection 2022.
4
ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator.
Front Neurosci. 2023 Sep 13;17:1225871. doi: 10.3389/fnins.2023.1225871. eCollection 2023.
5
Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
6
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
7
Trainable quantization for Speedy Spiking Neural Networks.
Front Neurosci. 2023 Mar 3;17:1154241. doi: 10.3389/fnins.2023.1154241. eCollection 2023.
8
SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
9
BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons.
Front Neurosci. 2022 Oct 12;16:991851. doi: 10.3389/fnins.2022.991851. eCollection 2022.
10
A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network.
Front Neurosci. 2022 May 26;16:759900. doi: 10.3389/fnins.2022.759900. eCollection 2022.

Cited By

1
An accurate and fast learning approach in the biologically spiking neural network.
Sci Rep. 2025 Feb 24;15(1):6585. doi: 10.1038/s41598-025-90113-0.
2
ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator.
Front Neurosci. 2023 Sep 13;17:1225871. doi: 10.3389/fnins.2023.1225871. eCollection 2023.
3
Quantization Framework for Fast Spiking Neural Networks.
Front Neurosci. 2022 Jul 19;16:918793. doi: 10.3389/fnins.2022.918793. eCollection 2022.
4
A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network.
Front Neurosci. 2022 May 26;16:759900. doi: 10.3389/fnins.2022.759900. eCollection 2022.
5
Neuroevolution Guided Hybrid Spiking Neural Network Training.
Front Neurosci. 2022 Apr 25;16:838523. doi: 10.3389/fnins.2022.838523. eCollection 2022.
6
Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing.
Nat Commun. 2022 Apr 19;13(1):2074. doi: 10.1038/s41467-022-29727-1.
7
A Systematic Literature Review on Distributed Machine Learning in Edge Computing.
Sensors (Basel). 2022 Mar 30;22(7):2665. doi: 10.3390/s22072665.
8
Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution.
Front Neurosci. 2021 Dec 24;15:757790. doi: 10.3389/fnins.2021.757790. eCollection 2021.
9
On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs.
Front Neurosci. 2021 Jan 14;14:603796. doi: 10.3389/fnins.2020.603796. eCollection 2020.
10
Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization.
Front Neurosci. 2020 Jun 30;14:653. doi: 10.3389/fnins.2020.00653. eCollection 2020.

References

1
Rethinking the performance comparison between SNNs and ANNs.
Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
2
ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing.
Front Neurosci. 2019 Mar 19;13:189. doi: 10.3389/fnins.2019.00189. eCollection 2019.
3
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
4
BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python.
Front Neuroinform. 2018 Dec 12;12:89. doi: 10.3389/fninf.2018.00089. eCollection 2018.
5
Deep Learning With Spiking Neurons: Opportunities and Challenges.
Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
6
GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
Neural Netw. 2018 Apr;100:49-58. doi: 10.1016/j.neunet.2018.01.010. Epub 2018 Feb 2.
7
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
8
CIFAR10-DVS: An Event-Stream Dataset for Object Classification.
Front Neurosci. 2017 May 30;11:309. doi: 10.3389/fnins.2017.00309. eCollection 2017.
9
Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
10
Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.
IEEE Trans Biomed Circuits Syst. 2016 Dec;10(6):1152-1160. doi: 10.1109/TBCAS.2016.2525823. Epub 2016 May 18.