

Spike Counts Based Low Complexity SNN Architecture With Binary Synapse.

Publication Info

IEEE Trans Biomed Circuits Syst. 2019 Dec;13(6):1664-1677. doi: 10.1109/TBCAS.2019.2945406. Epub 2019 Oct 4.

DOI: 10.1109/TBCAS.2019.2945406
PMID: 31603797
Abstract

In this paper, we present an energy- and area-efficient spiking neural network (SNN) processor based on novel spike-count-based methods. For a low-cost SNN design, we propose hardware-friendly complexity-reduction techniques for both the learning and inference modes of operation. First, for the unsupervised learning process, we propose a spike-count-based learning method. This learning approach uses pre- and post-synaptic spike counts to reduce the bit-width of the synaptic weights as well as the number of weight updates. For energy-efficient inference, we propose an accumulation-based computing scheme, in which the input spikes for each input axon are accumulated, without immediate membrane updates, until a pre-defined spike count is reached. In addition, computation-skip schemes identify meaningless computations and skip them to improve energy efficiency. Based on the proposed low-complexity design techniques, we design and implement the SNN processor in a 65 nm CMOS process. According to the implementation results, the processor achieves 87.4% recognition accuracy on the MNIST dataset using only 1-bit 230 k synaptic weights with 400 excitatory neurons. The energy consumption is 0.26 pJ/SOP and 0.31 μJ/inference in inference mode, and 1.42 pJ/SOP and 2.63 μJ/learning in learning mode.
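The accumulation-based inference scheme described above can be sketched in software as follows. This is a minimal illustrative model, not the paper's hardware design: the batch size `BATCH`, the threshold `THRESHOLD`, the reset-to-zero behavior, and the all-zero-row skip rule are assumptions chosen for demonstration, with 1-bit (binary) synaptic weights as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AXONS, N_NEURONS = 16, 8
BATCH = 4          # pre-defined spike count before a membrane update (assumed value)
THRESHOLD = 6.0    # firing threshold (assumed value)

# 1-bit synaptic weights, axons x neurons
W = rng.integers(0, 2, size=(N_AXONS, N_NEURONS)).astype(float)

membrane = np.zeros(N_NEURONS)       # membrane potentials
acc = np.zeros(N_AXONS, dtype=int)   # per-axon input spike accumulators

def on_input_spike(axon):
    """Accumulate input spikes; update membranes only every BATCH spikes per axon.

    Returns the list of neuron indices that fired on this event.
    """
    global membrane
    acc[axon] += 1
    if acc[axon] < BATCH:
        return []                     # no membrane computation yet
    acc[axon] = 0
    # computation skip: an all-zero weight row contributes nothing
    if not W[axon].any():
        return []
    membrane += BATCH * W[axon]       # one batched update instead of BATCH updates
    fired = np.nonzero(membrane >= THRESHOLD)[0]
    membrane[fired] = 0.0             # reset fired neurons (assumed reset rule)
    return fired.tolist()
```

The point of the batching is that a membrane add (and threshold check) runs once per `BATCH` input spikes on an axon instead of once per spike, and rows of zero weights are skipped entirely, which is where the energy savings in the abstract come from.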


Similar Articles

1
Spike Counts Based Low Complexity SNN Architecture With Binary Synapse.
IEEE Trans Biomed Circuits Syst. 2019 Dec;13(6):1664-1677. doi: 10.1109/TBCAS.2019.2945406. Epub 2019 Oct 4.
2
An Energy-Quality Scalable STDP Based Sparse Coding Processor With On-Chip Learning Capability.
IEEE Trans Biomed Circuits Syst. 2020 Feb;14(1):125-137. doi: 10.1109/TBCAS.2019.2963676. Epub 2020 Jan 3.
3
A 510 μW 0.738-mm² 6.2-pJ/SOP Online Learning Multi-Topology SNN Processor With Unified Computation Engine in 40-nm CMOS.
IEEE Trans Biomed Circuits Syst. 2023 Jun;17(3):507-520. doi: 10.1109/TBCAS.2023.3279367. Epub 2023 Jul 12.
4
An Area- and Energy-Efficient Spiking Neural Network With Spike-Time-Dependent Plasticity Realized With SRAM Processing-in-Memory Macro and On-Chip Unsupervised Learning.
IEEE Trans Biomed Circuits Syst. 2023 Feb;17(1):92-104. doi: 10.1109/TBCAS.2023.3242413.
5
Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
6
Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design.
IEEE Trans Biomed Circuits Syst. 2022 Jun;16(3):442-455. doi: 10.1109/TBCAS.2022.3181808. Epub 2022 Jul 12.
7
MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning.
IEEE Trans Biomed Circuits Syst. 2019 Oct;13(5):999-1010. doi: 10.1109/TBCAS.2019.2928793. Epub 2019 Jul 15.
8
A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
9
A Low-Power Spiking Neural Network Chip Based on a Compact LIF Neuron and Binary Exponential Charge Injector Synapse Circuits.
Sensors (Basel). 2021 Jun 29;21(13):4462. doi: 10.3390/s21134462.
10
Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier.
Sensors (Basel). 2020 Jan 16;20(2):500. doi: 10.3390/s20020500.

Cited By

1
Analog Convolutional Operator Circuit for Low-Power Mixed-Signal CNN Processing Chip.
Sensors (Basel). 2023 Dec 4;23(23):9612. doi: 10.3390/s23239612.
2
Complex spiking neural networks with synaptic time-delay based on anti-interference function.
Cogn Neurodyn. 2022 Dec;16(6):1485-1503. doi: 10.1007/s11571-022-09803-4. Epub 2022 Apr 15.
3
Spiking Neural Network with Linear Computational Complexity for Waveform Analysis in Amperometry.
Sensors (Basel). 2021 May 10;21(9):3276. doi: 10.3390/s21093276.