

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

Authors

Neftci Emre O, Augustine Charles, Paul Somnath, Detorakis Georgios

Affiliations

Neuromorphic Machine Intelligence Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States.

Circuit Research Lab, Intel Corporation, Hillsboro, OR, United States.

Publication

Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.

DOI: 10.3389/fnins.2017.00324
PMID: 28680387
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5478701/
Abstract

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
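As a concrete illustration of the rule the abstract describes, the sketch below shows an eRBP-style update for a single synapse: the top-layer error is projected to a hidden unit through fixed random feedback weights (rather than the transpose of the forward weights), and on each presynaptic spike the weight changes by that error signal, gated by a two-threshold (boxcar) test on the postsynaptic membrane state — the "two comparisons and one addition" of the paper. All names, threshold values, and the explicit learning-rate scaling are our illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an event-driven random back-propagation (eRBP) style update.
# Names and parameters are hypothetical; the boxcar gate supplies the "two
# comparisons", the error-modulated increment the "one addition" per weight.

def random_feedback_error(output_errors, feedback_weights):
    """Project top-layer errors to a hidden unit through FIXED random
    feedback weights (random BP: no transpose of the forward weights)."""
    return sum(g * e for g, e in zip(feedback_weights, output_errors))

def erbp_update(w, pre_spike, error, membrane_u,
                theta_min=-1.0, theta_max=1.0, lr=0.01):
    """Update one synaptic weight on a presynaptic spike event."""
    if not pre_spike:
        return w                                # no event, no update
    if theta_min < membrane_u < theta_max:      # two comparisons (boxcar gate)
        w = w + lr * error                      # one (learning-rate-scaled) addition
    return w

# Example: an error reaches a hidden unit through random feedback, and a
# presynaptic spike arrives while the membrane sits inside the boxcar window.
err = random_feedback_error([1.0, -0.5], [0.8, 0.4])   # approximately 0.6
w_new = erbp_update(0.5, True, err, membrane_u=0.2)
```

In the paper itself the error is carried by spikes from dedicated error neurons and accumulated in the second (dendritic) compartment of the two-compartment I&F neuron; here it is passed as a plain number for clarity.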


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/5bd180eb3649/fnins-11-00324-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/489238ff49cb/fnins-11-00324-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/03bd05d8b0e4/fnins-11-00324-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/5002549897d5/fnins-11-00324-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/4be4389d901f/fnins-11-00324-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/4bf1ed7549cf/fnins-11-00324-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/233f9fa38e59/fnins-11-00324-g0007.jpg

Similar Articles

1. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
2. Efficient Spike-Driven Learning With Dendritic Event-Based Processing.
Front Neurosci. 2021 Feb 19;15:601109. doi: 10.3389/fnins.2021.601109. eCollection 2021.
3. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.
4. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
5. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning.
Front Neurosci. 2018 Aug 29;12:583. doi: 10.3389/fnins.2018.00583. eCollection 2018.
6. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.
7. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1947-1958. doi: 10.1109/TNNLS.2021.3110991. Epub 2022 May 2.
8. Hardware implementation of backpropagation using progressive gradient descent for in situ training of multilayer neural networks.
Sci Adv. 2024 Jul 12;10(28):eado8999. doi: 10.1126/sciadv.ado8999.
9. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
10. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.

Cited By

1. Self-Contrastive Forward-Forward algorithm.
Nat Commun. 2025 Jul 1;16(1):5978. doi: 10.1038/s41467-025-61037-0.
2. SpikeAtConv: an integrated spiking-convolutional attention architecture for energy-efficient neuromorphic vision processing.
Front Neurosci. 2025 Mar 12;19:1536771. doi: 10.3389/fnins.2025.1536771. eCollection 2025.
3. Paired competing neurons improving STDP supervised local learning in Spiking Neural Networks.
4. Learning in the Machine: Random Backpropagation and the Deep Learning Channel.
Artif Intell. 2018 Jul;260:1-35. doi: 10.1016/j.artint.2018.03.003. Epub 2018 Apr 3.
5. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3227-3235. doi: 10.1109/TNNLS.2017.2726060. Epub 2017 Aug 1.
6. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.
Front Neurosci. 2024 Jul 24;18:1401690. doi: 10.3389/fnins.2024.1401690. eCollection 2024.
7. Mapless mobile robot navigation at the edge using self-supervised cognitive map learners.
Front Robot AI. 2024 May 22;11:1372375. doi: 10.3389/frobt.2024.1372375. eCollection 2024.
8. Training multi-layer spiking neural networks with plastic synaptic weights and delays.
Front Neurosci. 2024 Jan 24;17:1253830. doi: 10.3389/fnins.2023.1253830. eCollection 2023.
9. Sparse-firing regularization methods for spiking neural networks with time-to-first-spike coding.
Sci Rep. 2023 Dec 21;13(1):22897. doi: 10.1038/s41598-023-50201-5.
10. Structural plasticity for neuromorphic networks with electropolymerized dendritic PEDOT connections.
Nat Commun. 2023 Dec 8;14(1):8143. doi: 10.1038/s41467-023-43887-8.
11. Prediction of SMILE surgical cutting formula based on back propagation neural network.
Int J Ophthalmol. 2023 Sep 18;16(9):1424-1430. doi: 10.18240/ijo.2023.09.08. eCollection 2023.
12. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
13. Neural spiking for causal inference and learning.
PLoS Comput Biol. 2023 Apr 4;19(4):e1011005. doi: 10.1371/journal.pcbi.1011005. eCollection 2023 Apr.

References

Neural Comput. 2017 Mar;29(3):578-602. doi: 10.1162/NECO_a_00929. Epub 2017 Jan 17.
4. Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
5. Stochastic inference with spiking neurons in the high-conductance state.
Phys Rev E. 2016 Oct;94(4-1):042312. doi: 10.1103/PhysRevE.94.042312. Epub 2016 Oct 20.
6. Random synaptic feedback weights support error backpropagation for deep learning.
Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
7. Convolutional networks for fast, energy-efficient neuromorphic computing.
Proc Natl Acad Sci U S A. 2016 Oct 11;113(41):11441-11446. doi: 10.1073/pnas.1604850113. Epub 2016 Sep 20.
8. Energy-Efficient Neuromorphic Classifiers.
Neural Comput. 2016 Oct;28(10):2011-44. doi: 10.1162/NECO_a_00882. Epub 2016 Aug 24.
9. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
Front Neurosci. 2016 Jun 29;10:241. doi: 10.3389/fnins.2016.00241. eCollection 2016.
10. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.
Trends Cogn Sci. 2016 Jul;20(7):512-534. doi: 10.1016/j.tics.2016.05.004.