Suppr 超能文献


EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.

Authors

Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Affiliations

Embedded Computing Systems, Institute of Computer Engineering, Technische Universität Wien, Vienna, Austria.

eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates.

Publication

Front Neurosci. 2022 Aug 10;16:937782. doi: 10.3389/fnins.2022.937782. eCollection 2022.

DOI: 10.3389/fnins.2022.937782
PMID: 36033624
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9399768/
Abstract

Spiking Neural Networks (SNNs) have shown capabilities of achieving high accuracy under unsupervised settings and low operational power/energy due to their bio-plausible computations. Previous studies identified that DRAM-based off-chip memory accesses dominate the energy consumption of SNN processing. However, state-of-the-art works do not optimize the DRAM energy-per-access, thereby hindering SNN-based systems from achieving further energy-efficiency gains. To substantially reduce the DRAM energy-per-access, an effective solution is to decrease the DRAM supply voltage, but this may lead to errors in DRAM cells (i.e., the so-called approximate DRAM). Toward this, we propose EnforceSNN, a novel design framework that provides a solution for resilient and energy-efficient SNN inference using reduced-voltage DRAM for embedded systems. The key mechanisms of EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile under different bit error rate (BER) values; (4) leveraging this information to develop an efficient fault-aware training (FAT) scheme that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption. The experimental results show that EnforceSNN maintains accuracy (i.e., no accuracy loss for BER ≤ 10⁻³) compared to the baseline SNN with accurate DRAM, while achieving up to 84.9% DRAM energy saving and up to 4.1x speed-up of DRAM data throughput across different network sizes.
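Mechanisms (1), (3), and (4) of the abstract rest on a common building block: store the weights as quantized integers, then emulate reduced-voltage DRAM by flipping stored bits at a target BER (during error-tolerance analysis, and inside the forward pass for fault-aware training). The sketch below is an illustrative reconstruction, not the authors' implementation; the function names, the signed 8-bit format, and the uniform independent-bit-flip fault model are assumptions.

```python
import numpy as np

def quantize(weights, n_bits=8):
    """Uniformly quantize float weights to signed n-bit integers (sketch).
    Returns the integer weights and the scale needed to dequantize."""
    scale = np.max(np.abs(weights)) / (2 ** (n_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def inject_bit_errors(q, ber, n_bits=8, rng=None):
    """Flip each stored bit independently with probability `ber`,
    emulating cell errors in a reduced-voltage (approximate) DRAM."""
    rng = rng or np.random.default_rng(0)
    raw = q.view(np.uint8).copy()          # reinterpret int8 as raw bytes
    for bit in range(n_bits):
        flip = rng.random(raw.shape) < ber  # per-element flip mask for this bit
        raw[flip] ^= np.uint8(1 << bit)
    return raw.view(np.int8)
```

For fault-aware training, the corrupted weights returned by `inject_bit_errors` would be the ones used in the forward pass, so the network learns to tolerate the target BER; for error-tolerance analysis, one would sweep `ber` and record the resulting accuracy.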

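Mechanism (5), choosing the SNN model with a good accuracy/memory/energy trade-off, could be sketched as a normalized weighted score over candidate models. The scoring function, field names, and weights below are illustrative assumptions, not the paper's actual selection algorithm.

```python
def select_model(candidates, w_acc=1.0, w_mem=0.5, w_energy=0.5):
    """Pick the candidate with the best trade-off score (sketch).
    `candidates`: dicts with 'name', 'accuracy' (higher is better),
    'memory' and 'energy' (lower is better). Memory and energy are
    normalized by the worst candidate so the weights are comparable."""
    mem_max = max(c['memory'] for c in candidates)
    en_max = max(c['energy'] for c in candidates)
    def score(c):
        return (w_acc * c['accuracy']
                - w_mem * c['memory'] / mem_max
                - w_energy * c['energy'] / en_max)
    return max(candidates, key=score)
```

With such a score, a slightly less accurate model can win if it is much cheaper in memory and DRAM energy, which matches the embedded-systems framing of the abstract.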

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/d8f1ae9b2043/fnins-16-937782-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/d70c3c83d99b/fnins-16-937782-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/9459dcacd47c/fnins-16-937782-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/1eccf35ec3ab/fnins-16-937782-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/1b18b9634911/fnins-16-937782-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/fff21fc13945/fnins-16-937782-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/d1c831c3beba/fnins-16-937782-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/336b74c1a096/fnins-16-937782-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/cca4e58862b1/fnins-16-937782-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/0c18a79a31b7/fnins-16-937782-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/c86a7d7763a4/fnins-16-937782-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/458f8443fe13/fnins-16-937782-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/9db453439c17/fnins-16-937782-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/f96371a92ae6/fnins-16-937782-g0014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c985/9399768/0d6d080e9e9e/fnins-16-937782-g0015.jpg

Similar Articles

1. EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.
   Front Neurosci. 2022 Aug 10;16:937782. doi: 10.3389/fnins.2022.937782. eCollection 2022.
2. RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults.
   Front Neurosci. 2023 Apr 12;17:1159440. doi: 10.3389/fnins.2023.1159440. eCollection 2023.
3. SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
   Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
4. SNN4Agents: a framework for developing energy-efficient embodied spiking neural networks for autonomous agents.
   Front Robot AI. 2024 Jul 26;11:1401677. doi: 10.3389/frobt.2024.1401677. eCollection 2024.
5. A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):446-460. doi: 10.1109/TNNLS.2021.3095724. Epub 2023 Jan 5.
6. A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network.
   Front Neurosci. 2022 May 26;16:759900. doi: 10.3389/fnins.2022.759900. eCollection 2022.
7. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
8. High-accuracy deep ANN-to-SNN conversion using quantization-aware training framework and calcium-gated bipolar leaky integrate and fire neuron.
   Front Neurosci. 2023 Mar 8;17:1141701. doi: 10.3389/fnins.2023.1141701. eCollection 2023.
9. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
   IEEE Trans Neural Netw Learn Syst. 2023 Oct;34(10):7099-7113. doi: 10.1109/TNNLS.2021.3138056. Epub 2023 Oct 5.
10. SPIDEN: deep Spiking Neural Networks for efficient image denoising.
    Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.

Cited By

1. SNN4Agents: a framework for developing energy-efficient embodied spiking neural networks for autonomous agents.
   Front Robot AI. 2024 Jul 26;11:1401677. doi: 10.3389/frobt.2024.1401677. eCollection 2024.
2. RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults.
   Front Neurosci. 2023 Apr 12;17:1159440. doi: 10.3389/fnins.2023.1159440. eCollection 2023.
3. Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms.
   Sensors (Basel). 2021 May 7;21(9):3240. doi: 10.3390/s21093240.

References

1. Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations.
   Philos Trans A Math Phys Eng Sci. 2020 Mar 6;378(2166):20190052. doi: 10.1098/rsta.2019.0052. Epub 2020 Jan 20.
2. SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron.
   Front Neurosci. 2019 Jul 12;13:625. doi: 10.3389/fnins.2019.00625. eCollection 2019.
3. MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor With Stochastic Spike-Driven Online Learning.
   IEEE Trans Biomed Circuits Syst. 2019 Oct;13(5):999-1010. doi: 10.1109/TBCAS.2019.2928793. Epub 2019 Jul 15.
4. Deep learning in spiking neural networks.
   Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
5. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python.
   Front Neuroinform. 2018 Dec 12;12:89. doi: 10.3389/fninf.2018.00089. eCollection 2018.
6. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
   IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
7. Deep Learning With Spiking Neurons: Opportunities and Challenges.
   Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
8. Unsupervised learning of digit recognition using spike-timing-dependent plasticity.
   Front Comput Neurosci. 2015 Aug 3;9:99. doi: 10.3389/fncom.2015.00099. eCollection 2015.
9. Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns.
   Neuron. 2009 Feb 26;61(4):597-608. doi: 10.1016/j.neuron.2009.01.008.
10. Which model to use for cortical spiking neurons?
    IEEE Trans Neural Netw. 2004 Sep;15(5):1063-70. doi: 10.1109/TNN.2004.832719.