


Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks

Authors

Islam Riadul, Majurski Patrick, Kwon Jun, Sharma Anurag, Tummala Sri Ranga Sai Krishna

Affiliation

Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD 21250, USA.

Publication

Sensors (Basel). 2024 Feb 19;24(4):1329. doi: 10.3390/s24041329.

DOI: 10.3390/s24041329
PMID: 38400487
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10892219/
Abstract

Organizations managing high-performance computing systems face a multitude of challenges, including overarching concerns such as overall energy consumption, microprocessor clock frequency limitations, and the escalating costs associated with chip production. Evidently, processor speeds have plateaued over the last decade, persisting within the range of 2 GHz to 5 GHz. Scholars assert that brain-inspired computing holds substantial promise for mitigating these challenges. The spiking neural network (SNN) particularly stands out for its commendable power efficiency when juxtaposed with conventional design paradigms. Nevertheless, our scrutiny has brought to light several pivotal challenges impeding the seamless implementation of large-scale neural networks (NNs) on silicon. These challenges encompass the absence of automated tools, the need for multifaceted domain expertise, and the inadequacy of existing algorithms to efficiently partition and place extensive SNN computations onto hardware infrastructure. In this paper, we posit the development of an automated tool flow capable of transmuting any NN into an SNN. This undertaking involves the creation of a novel graph-partitioning algorithm designed to strategically place SNNs on a network-on-chip (NoC), thereby paving the way for future energy-efficient and high-performance computing paradigms. The presented methodology showcases its effectiveness by successfully transforming ANN architectures into SNNs with a marginal average error penalty of merely 2.65%. The proposed graph-partitioning algorithm enables a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, on average, underscoring the effectiveness of the proposed algorithm in optimizing NN communication pathways. Compared to a baseline graph-partitioning algorithm, the proposed approach exhibits an average decrease of 79.74% in latency and a 14.67% reduction in energy consumption. Using existing NoC tools, the energy-latency product of SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
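The abstract reports an ANN-to-SNN transformation with only a 2.65% average error penalty. The paper's actual tool flow is not reproduced here, but a standard technique behind such conversions is rate coding: a ReLU activation is approximated by the firing rate of an integrate-and-fire (IF) neuron driven by the same weighted input. A minimal sketch under that assumption (all function names are illustrative, not the authors' tool):

```python
import numpy as np

def ann_layer(x, w):
    """Reference ANN layer: ReLU(x @ w)."""
    return np.maximum(0.0, x @ w)

def snn_layer(x, w, t_steps=1000, v_thresh=1.0):
    """Rate-coded IF approximation of the same layer.

    Drives integrate-and-fire neurons with the constant current x @ w
    for t_steps timesteps; the spike count divided by t_steps
    approximates the ReLU output (for currents in [0, v_thresh]).
    """
    current = x @ w
    v = np.zeros_like(current)       # membrane potentials
    spikes = np.zeros_like(current)  # spike counts per neuron
    for _ in range(t_steps):
        v += current                 # integrate input current
        fired = v >= v_thresh        # threshold crossing
        spikes += fired
        v[fired] -= v_thresh         # soft reset (subtract threshold)
    return spikes / t_steps          # firing rate ~= ReLU activation

# Small random layer; weights scaled so currents stay below threshold.
rng = np.random.default_rng(0)
x = rng.random((1, 4))
w = rng.random((4, 3)) * 0.25
rate = snn_layer(x, w)
exact = ann_layer(x, w)
```

The quantization error of this scheme shrinks as 1/t_steps, which mirrors the paper's observation that conversion incurs only a small accuracy penalty at sufficient simulation length.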


Figures (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/c1345bd24a73/sensors-24-01329-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/ec62fa7b5ef2/sensors-24-01329-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/a41a86ba3515/sensors-24-01329-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/71a4396b2d90/sensors-24-01329-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/41a20cdf53c9/sensors-24-01329-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4f9/10892219/2be3dcb88d0f/sensors-24-01329-g006.jpg
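The abstract's graph-partitioning algorithm reduces inter-synaptic (cross-core) communication by placing strongly connected neurons on the same NoC node. The authors' algorithm is not given in this record; the sketch below shows only the generic objective — greedy local moves that minimize cross-partition synaptic traffic under a per-core capacity constraint (a Kernighan-Lin-style heuristic; all names and the capacity model are assumptions):

```python
import numpy as np

def partition_cost(adj, labels):
    """Total weight of synapses crossing partitions (inter-core traffic)."""
    cross = labels[:, None] != labels[None, :]
    return adj[cross].sum() / 2.0  # symmetric adj: each edge counted twice

def greedy_partition(adj, n_parts, capacity):
    """Greedy placement: repeatedly move the neuron whose relocation most
    reduces cross-partition traffic, respecting per-core capacity."""
    n = adj.shape[0]
    labels = np.arange(n) % n_parts          # round-robin initial placement
    improved = True
    while improved:
        improved = False
        for v in range(n):
            best_p, best_cost = labels[v], partition_cost(adj, labels)
            for p in range(n_parts):
                if p == labels[v] or np.sum(labels == p) >= capacity:
                    continue                 # skip full cores
                old = labels[v]
                labels[v] = p                # tentatively move neuron v
                c = partition_cost(adj, labels)
                if c < best_cost:
                    best_p, best_cost = p, c
                labels[v] = old              # undo tentative move
            if best_p != labels[v]:
                labels[v] = best_p           # commit the best move
                improved = True
    return labels

# Two tightly coupled neuron pairs {0,1} and {2,3}; the optimal 2-way
# split keeps each pair together and cuts only the weight-1 synapses.
adj = np.array([[0, 5, 0, 1],
                [5, 0, 1, 0],
                [0, 1, 0, 5],
                [1, 0, 5, 0]], dtype=float)
labels = greedy_partition(adj, n_parts=2, capacity=3)
```

On this toy graph the greedy pass drops cross-partition traffic from 12 (round-robin placement) to 2, the same kind of reduction the paper quantifies as a 14.22% average decrease in inter-synaptic communication.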

Similar Articles

1. Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks. Sensors (Basel). 2024 Feb 19;24(4):1329. doi: 10.3390/s24041329.
2. EdgeMap: An Optimized Mapping Toolchain for Spiking Neural Network in Edge Computing. Sensors (Basel). 2023 Jul 20;23(14):6548. doi: 10.3390/s23146548.
3. A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network From Convolutional Neural Network. Front Neurosci. 2022 May 26;16:759900. doi: 10.3389/fnins.2022.759900. eCollection 2022.
4. A 510 μW 0.738-mm² 6.2-pJ/SOP Online Learning Multi-Topology SNN Processor With Unified Computation Engine in 40-nm CMOS. IEEE Trans Biomed Circuits Syst. 2023 Jun;17(3):507-520. doi: 10.1109/TBCAS.2023.3279367. Epub 2023 Jul 12.
5. Neuromorphic Sentiment Analysis Using Spiking Neural Networks. Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
6. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices. Front Neurosci. 2020 Jul 7;14:423. doi: 10.3389/fnins.2020.00423. eCollection 2020.
7. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks. Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
8. A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):446-460. doi: 10.1109/TNNLS.2021.3095724. Epub 2023 Jan 5.
9. Advancing interconnect density for spiking neural network hardware implementations using traffic-aware adaptive network-on-chip routers. Neural Netw. 2012 Sep;33:42-57. doi: 10.1016/j.neunet.2012.04.004. Epub 2012 Apr 23.
10. Optimal Mapping of Spiking Neural Network to Neuromorphic Hardware for Edge-AI. Sensors (Basel). 2022 Sep 24;22(19):7248. doi: 10.3390/s22197248.
