RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults.

Authors

Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Affiliations

Embedded Computing Systems, Institute of Computer Engineering, Technische Universität Wien (TU Wien), Vienna, Austria.

eBrain Lab, Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi, United Arab Emirates.

Publication

Front Neurosci. 2023 Apr 12;17:1159440. doi: 10.3389/fnins.2023.1159440. eCollection 2023.

DOI: 10.3389/fnins.2023.1159440
PMID: 37123371
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10130579/
Abstract

To maximize the performance and energy efficiency of Spiking Neural Network (SNN) processing on resource-constrained embedded systems, specialized hardware accelerators/chips are employed. However, these SNN chips may suffer from permanent faults which can affect the functionality of weight memory and neuron behavior, thereby causing potentially significant accuracy degradation and system malfunctioning. Such permanent faults may come from manufacturing defects during the fabrication process, and/or from device/transistor damage (e.g., due to wear-out) during run-time operation. However, the impact of permanent faults in SNN chips and the respective mitigation techniques have not been thoroughly investigated yet. Toward this, we propose RescueSNN, a novel methodology to mitigate permanent faults in the compute engine of SNN chips without retraining, thereby significantly cutting down the design time and retraining costs, while maintaining throughput and quality. The key ideas of our RescueSNN methodology are (1) analyzing the characteristics of SNNs under permanent faults; (2) leveraging this analysis to improve SNN fault tolerance through effective fault-aware mapping (FAM); and (3) devising lightweight hardware enhancements to support FAM. Our FAM technique leverages the fault map of the SNN compute engine for (i) minimizing weight corruption when mapping weight bits onto faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow. The experimental results show that our RescueSNN improves accuracy by up to 80% while keeping the throughput reduction below 25% at high fault rates (e.g., 0.5 of the potential fault locations), as compared to running SNNs on the faulty chip without mitigation. In this manner, embedded systems that employ RescueSNN-enhanced chips can efficiently ensure reliable executions against permanent faults during their operational lifetime.
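The first part of the fault-aware mapping (FAM) idea above — placing weight bits on memory cells so that faulty cells coincide with the least-significant bits — can be sketched as follows. This is a minimal illustration under assumed names (`map_weight_bits`, `cell_faulty`), not the paper's actual implementation:

```python
# Hedged sketch of the fault-aware mapping (FAM) idea from the abstract:
# given a fault map of the weight memory, assign weight bits to cells so
# that most-significant bits (MSBs) avoid faulty cells, minimizing the
# magnitude of weight corruption. All names here are illustrative.

def map_weight_bits(weight_bits, cell_faulty):
    """Assign weight bits (index 0 = MSB) to memory cells, preferring
    healthy cells for the most-significant bits.

    weight_bits: list of bits, MSB first.
    cell_faulty: list of bools, True if that cell is permanently faulty.
    Returns, for each bit position, the index of its assigned cell.
    """
    # Stable sort puts healthy cells (False) before faulty ones (True),
    # so MSBs land on healthy cells and LSBs absorb the faults.
    order = sorted(range(len(cell_faulty)), key=lambda c: cell_faulty[c])
    return [order[i] for i in range(len(weight_bits))]

# Example: an 8-bit weight, with cells 0 and 1 faulty.
faults = [True, True] + [False] * 6
assignment = map_weight_bits([0] * 8, faults)
# Bit positions 0..5 (MSBs) map to healthy cells 2..7;
# the two LSB positions map to the faulty cells 0 and 1.
```

A corrupted LSB perturbs an 8-bit weight by at most 1/128 of its range, whereas a corrupted MSB flips its sign or halves its magnitude, which is why steering faults toward LSBs limits accuracy degradation.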


[Figures g0001–g0015 of the article are available via the PMC full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10130579/]

Similar Articles

1
RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults.
Front Neurosci. 2023 Apr 12;17:1159440. doi: 10.3389/fnins.2023.1159440. eCollection 2023.
2
EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.
Front Neurosci. 2022 Aug 10;16:937782. doi: 10.3389/fnins.2022.937782. eCollection 2022.
3
SalvageDNN: salvaging deep neural network accelerators with permanent faults through saliency-driven fault-aware mapping.
Philos Trans A Math Phys Eng Sci. 2020 Feb 7;378(2164):20190164. doi: 10.1098/rsta.2019.0164. Epub 2019 Dec 23.
4
SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
5
Toward Robust Cognitive 3D Brain-Inspired Cross-Paradigm System.
Front Neurosci. 2021 Jun 25;15:690208. doi: 10.3389/fnins.2021.690208. eCollection 2021.
6
Fault-Tolerant Attitude Tracking Control Driven by Spiking NNs for Unmanned Aerial Vehicles.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3773-3785. doi: 10.1109/TNNLS.2023.3342078. Epub 2025 Feb 6.
7
An FPGA implementation of Bayesian inference with spiking neural networks.
Front Neurosci. 2024 Jan 5;17:1291051. doi: 10.3389/fnins.2023.1291051. eCollection 2023.
8
Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes.
Front Neurosci. 2020 Feb 14;14:104. doi: 10.3389/fnins.2020.00104. eCollection 2020.
9
Chip-In-Loop SNN Proxy Learning: a new method for efficient training of spiking neural networks.
Front Neurosci. 2024 Jan 4;17:1323121. doi: 10.3389/fnins.2023.1323121. eCollection 2023.
10
On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
Front Neurosci. 2020 Jul 7;14:423. doi: 10.3389/fnins.2020.00423. eCollection 2020.

Cited By

1
SNN4Agents: a framework for developing energy-efficient embodied spiking neural networks for autonomous agents.
Front Robot AI. 2024 Jul 26;11:1401677. doi: 10.3389/frobt.2024.1401677. eCollection 2024.

References

1
EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.
Front Neurosci. 2022 Aug 10;16:937782. doi: 10.3389/fnins.2022.937782. eCollection 2022.
2
On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs.
Front Neurosci. 2021 Jan 14;14:603796. doi: 10.3389/fnins.2020.603796. eCollection 2020.
3
SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron.
Front Neurosci. 2019 Jul 12;13:625. doi: 10.3389/fnins.2019.00625. eCollection 2019.
4
Deep learning in spiking neural networks.
Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
5
BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python.
Front Neuroinform. 2018 Dec 12;12:89. doi: 10.3389/fninf.2018.00089. eCollection 2018.
6
A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
7
Deep Learning With Spiking Neurons: Opportunities and Challenges.
Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
8
Unsupervised learning of digit recognition using spike-timing-dependent plasticity.
Front Comput Neurosci. 2015 Aug 3;9:99. doi: 10.3389/fncom.2015.00099. eCollection 2015.
9
Which model to use for cortical spiking neurons?
IEEE Trans Neural Netw. 2004 Sep;15(5):1063-70. doi: 10.1109/TNN.2004.832719.