
Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype

Authors

Liu Chen, Bellec Guillaume, Vogginger Bernhard, Kappel David, Partzsch Johannes, Neumärker Felix, Höppner Sebastian, Maass Wolfgang, Furber Steve B, Legenstein Robert, Mayr Christian G

Affiliations

Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany.

Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.

Publication

Front Neurosci. 2018 Nov 16;12:840. doi: 10.3389/fnins.2018.00840. eCollection 2018.

DOI: 10.3389/fnins.2018.00840
PMID: 30505263
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6250847/
Abstract

The memory requirement of deep learning algorithms is considered incompatible with the memory restriction of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet, these techniques are not applicable to the case when neural networks have to be trained directly on hardware due to the hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm which continuously rewires the network while preserving very sparse connectivity all along the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy constraints. When compared to an X86 CPU implementation, neural network training on the SpiNNaker 2 prototype improves power and energy consumption by two orders of magnitude.
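The rewiring idea behind DEEP R can be illustrated in a few lines. The sketch below is a minimal NumPy rendition of the published formulation (noisy gradient step on active connections, pruning on sign change, random reactivation of dormant connections to hold sparsity fixed); it is not the authors' on-chip implementation, and the function name and hyperparameters (`eta`, `alpha`, `temp`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_r_step(theta, grad, n_active, eta=0.05, alpha=1e-4, temp=1e-3):
    """One DEEP R update over the vector of all potential connections.

    theta    : connection parameters; theta_k >= 0 marks an active connection
    grad     : loss gradient w.r.t. the weights (same shape as theta)
    n_active : target number of active connections (fixed sparsity level)
    """
    active = theta >= 0
    # Exploration noise, as in the stochastic formulation of DEEP R.
    noise = np.sqrt(2.0 * eta * temp) * rng.standard_normal(theta.shape)
    # Gradient step + L1 shrinkage + noise, applied to active connections only;
    # a connection whose parameter drops below 0 becomes dormant (pruned).
    theta = np.where(active, theta - eta * (grad + alpha) + noise, theta)
    # Rewiring: reactivate randomly chosen dormant connections until the
    # number of active connections is back at n_active.
    deficit = n_active - int((theta >= 0).sum())
    if deficit > 0:
        dormant = np.flatnonzero(theta < 0)
        revive = rng.choice(dormant, size=deficit, replace=False)
        theta[revive] = 0.0  # reborn with zero weight
    return theta

# Usage: 1000 potential connections, ~1.3% active, as in the paper's setting.
theta = np.full(1000, -1.0)
theta[rng.choice(1000, 13, replace=False)] = 0.01
theta = deep_r_step(theta, grad=rng.standard_normal(1000), n_active=13)
```

Because pruning and reactivation balance exactly, the active-connection count (and hence the memory footprint) stays constant through every step, which is what makes training inside a hard 64 KB budget possible.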


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/f80d14d73267/fnins-12-00840-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/e397bc7aa266/fnins-12-00840-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/55deb284112b/fnins-12-00840-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/a04e94e4e8b8/fnins-12-00840-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/d9f442b9ef00/fnins-12-00840-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/82a43a3faba2/fnins-12-00840-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/a83f06e2b333/fnins-12-00840-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/8f00f21e9622/fnins-12-00840-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/698a/6250847/4cb4215a0b20/fnins-12-00840-g0011.jpg

Similar articles

1. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype.
Front Neurosci. 2018 Nov 16;12:840. doi: 10.3389/fnins.2018.00840. eCollection 2018.
2. E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware.
Front Neurosci. 2022 Nov 28;16:1018006. doi: 10.3389/fnins.2022.1018006. eCollection 2022.
3. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3176-3187. doi: 10.1109/TNNLS.2017.2717442. Epub 2017 Jul 18.
4. Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform.
Neural Netw. 2020 Jan;121:319-328. doi: 10.1016/j.neunet.2019.09.008. Epub 2019 Sep 24.
5. Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks.
Front Neurosci. 2022 Mar 14;16:819063. doi: 10.3389/fnins.2022.819063. eCollection 2022.
6. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware.
Front Neuroanat. 2016 Apr 7;10:37. doi: 10.3389/fnana.2016.00037. eCollection 2016.
7. Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype.
IEEE Trans Biomed Circuits Syst. 2019 Jun;13(3):579-591. doi: 10.1109/TBCAS.2019.2906401. Epub 2019 Mar 27.
8. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model.
Front Neurosci. 2018 May 23;12:291. doi: 10.3389/fnins.2018.00291. eCollection 2018.
9. Efficient SNN multi-cores MAC array acceleration on SpiNNaker 2.
Front Neurosci. 2023 Aug 7;17:1223262. doi: 10.3389/fnins.2023.1223262. eCollection 2023.
10. Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.
Neural Netw. 2018 Dec;108:217-223. doi: 10.1016/j.neunet.2018.08.012. Epub 2018 Sep 1.

Cited by

1. ON-OFF neuromorphic ISING machines using Fowler-Nordheim annealers.
Nat Commun. 2025 Mar 31;16(1):3086. doi: 10.1038/s41467-025-58231-5.
2. Research on Anti-Interference Performance of Spiking Neural Network Under Network Connection Damage.
Brain Sci. 2025 Feb 20;15(3):217. doi: 10.3390/brainsci15030217.
3. Editorial: Understanding and bridging the gap between neuromorphic computing and machine learning, volume II.
Front Comput Neurosci. 2024 Oct 3;18:1455530. doi: 10.3389/fncom.2024.1455530. eCollection 2024.
4. High-performance deep spiking neural networks with 0.3 spikes per neuron.
Nat Commun. 2024 Aug 9;15(1):6793. doi: 10.1038/s41467-024-51110-5.
5. E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware.
Front Neurosci. 2022 Nov 28;16:1018006. doi: 10.3389/fnins.2022.1018006. eCollection 2022.
6. Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks.
Front Neurosci. 2022 Mar 14;16:819063. doi: 10.3389/fnins.2022.819063. eCollection 2022.
7. Adaptive Extreme Edge Computing for Wearable Devices.
Front Neurosci. 2021 May 11;15:611300. doi: 10.3389/fnins.2021.611300. eCollection 2021.

References

1. NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps.
IEEE Trans Neural Netw Learn Syst. 2019 Mar;30(3):644-656. doi: 10.1109/TNNLS.2018.2852335. Epub 2018 Jul 26.
2. Demonstrating Hybrid Learning in a Flexible Neuromorphic Hardware System.
IEEE Trans Biomed Circuits Syst. 2017 Feb;11(1):128-142. doi: 10.1109/TBCAS.2016.2579164. Epub 2016 Sep 9.
3. The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain.
Neuron. 2016 Nov 2;92(3):574-581. doi: 10.1016/j.neuron.2016.10.046.
4. Convolutional networks for fast, energy-efficient neuromorphic computing.
Proc Natl Acad Sci U S A. 2016 Oct 11;113(41):11441-11446. doi: 10.1073/pnas.1604850113. Epub 2016 Sep 20.
5. Implementation of a spike-based perceptron learning rule using TiO2-x memristors.
Front Neurosci. 2015 Oct 2;9:357. doi: 10.3389/fnins.2015.00357. eCollection 2015.
6. Single pairing spike-timing dependent plasticity in BiFeO3 memristors with a time window of 25 ms to 125 μs.
Front Neurosci. 2015 Jun 30;9:227. doi: 10.3389/fnins.2015.00227. eCollection 2015.
7. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
8. Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm CMOS.
Front Neurosci. 2015 Feb 2;9:10. doi: 10.3389/fnins.2015.00010. eCollection 2015.
9. Real-time classification and sensor fusion with a spiking deep belief network.
Front Neurosci. 2013 Oct 8;7:178. doi: 10.3389/fnins.2013.00178. eCollection 2013.
10. Simulation of networks of spiking neurons: a review of tools and strategies.
J Comput Neurosci. 2007 Dec;23(3):349-98. doi: 10.1007/s10827-007-0038-6. Epub 2007 Jul 12.