
MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning.

Publication Information

IEEE Trans Biomed Circuits Syst. 2019 Oct;13(5):999-1010. doi: 10.1109/TBCAS.2019.2928793. Epub 2019 Jul 15.

DOI:10.1109/TBCAS.2019.2928793
PMID:31329562
Abstract

Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient SNNs still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this paper, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff.
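The abstract's key ingredients — leaky integrate-and-fire (LIF) neurons, binary weights, and stochastic spike-driven plasticity — can be illustrated with a minimal software sketch. This is a hypothetical toy model, not the paper's S-SDSP rule or MorphIC's fixed-point hardware implementation; the leak factor, threshold, and update probability below are invented for illustration only.

```python
import random

class LIFNeuron:
    """Discrete-time leaky integrate-and-fire neuron.
    Illustrative sketch only; parameters are not MorphIC's."""
    def __init__(self, leak=0.9, threshold=4.0):
        self.v = 0.0           # membrane potential
        self.leak = leak       # multiplicative leak per timestep
        self.threshold = threshold

    def step(self, input_current):
        # Leak, integrate, then fire-and-reset when the threshold is crossed.
        self.v = self.leak * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1
        return 0

def stochastic_binary_update(weight, potentiate, p=0.1):
    """Stochastic plasticity on a 1-bit weight: apply the update only with
    probability p, so binary synapses can approximate gradual learning
    without multi-bit storage. (Toy stand-in for a spike-driven rule.)"""
    if random.random() < p:
        return 1 if potentiate else 0
    return weight

if __name__ == "__main__":
    # A constant input drives periodic spiking.
    neuron = LIFNeuron()
    spikes = [neuron.step(1.0) for _ in range(20)]
    print("spike count:", sum(spikes))
```

The stochastic update is the point of interest: flipping a binary weight only occasionally lets an ensemble of 1-bit synapses track an effective analog weight in expectation, which is one common motivation for stochastic learning rules on binary-weight hardware.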


Similar Articles

1. MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning.
   IEEE Trans Biomed Circuits Syst. 2019 Oct;13(5):999-1010. doi: 10.1109/TBCAS.2019.2928793. Epub 2019 Jul 15.
2. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
   IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
3. A Low-Power Spiking Neural Network Chip Based on a Compact LIF Neuron and Binary Exponential Charge Injector Synapse Circuits.
   Sensors (Basel). 2021 Jun 29;21(13):4462. doi: 10.3390/s21134462.
4. A Probabilistic Synapse With Strained MTJs for Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2020 Apr;31(4):1113-1123. doi: 10.1109/TNNLS.2019.2917819. Epub 2019 Jun 18.
5. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
   Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
6. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning.
   Sci Rep. 2016 Jul 13;6:29545. doi: 10.1038/srep29545.
7. Efficient Synapse Memory Structure for Reconfigurable Digital Neuromorphic Hardware.
   Front Neurosci. 2018 Nov 20;12:829. doi: 10.3389/fnins.2018.00829. eCollection 2018.
8. A Neuromorphic Processing System With Spike-Driven SNN Processor for Wearable ECG Classification.
   IEEE Trans Biomed Circuits Syst. 2022 Aug;16(4):511-523. doi: 10.1109/TBCAS.2022.3189364. Epub 2022 Oct 12.
9. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
   Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
10. Spike Counts Based Low Complexity SNN Architecture With Binary Synapse.
   IEEE Trans Biomed Circuits Syst. 2019 Dec;13(6):1664-1677. doi: 10.1109/TBCAS.2019.2945406. Epub 2019 Oct 4.

Cited By

1. A tunable multi-timescale Indium-Gallium-Zinc-Oxide thin-film transistor neuron towards hybrid solutions for spiking neuromorphic applications.
   Commun Eng. 2024 Jul 23;3(1):102. doi: 10.1038/s44172-024-00248-7.
2. Direct training high-performance deep spiking neural networks: a review of theories and methods.
   Front Neurosci. 2024 Jul 31;18:1383844. doi: 10.3389/fnins.2024.1383844. eCollection 2024.
3. Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition.
   Front Neurosci. 2023 Nov 24;17:1244675. doi: 10.3389/fnins.2023.1244675. eCollection 2023.
4. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics.
   Front Neurosci. 2023 Aug 28;17:1198306. doi: 10.3389/fnins.2023.1198306. eCollection 2023.
5. Self-organization of an inhomogeneous memristive hardware for sequence learning.
   Nat Commun. 2022 Oct 2;13(1):5793. doi: 10.1038/s41467-022-33476-6.
6. EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.
   Front Neurosci. 2022 Aug 10;16:937782. doi: 10.3389/fnins.2022.937782. eCollection 2022.
7. SAM: A Unified Self-Adaptive Multicompartmental Spiking Neuron Model for Learning With Working Memory.
   Front Neurosci. 2022 Apr 18;16:850945. doi: 10.3389/fnins.2022.850945. eCollection 2022.
8. The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity.
   Front Neurosci. 2022 Feb 24;16:795876. doi: 10.3389/fnins.2022.795876. eCollection 2022.
9. Surrogate gradients for analog neuromorphic computing.
   Proc Natl Acad Sci U S A. 2022 Jan 25;119(4). doi: 10.1073/pnas.2109194119.
10. Effective Plug-Ins for Reducing Inference-Latency of Spiking Convolutional Neural Networks During Inference Phase.
   Front Comput Neurosci. 2021 Oct 18;15:697469. doi: 10.3389/fncom.2021.697469. eCollection 2021.