

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

Author Information

Neftci Emre O, Augustine Charles, Paul Somnath, Detorakis Georgios

Affiliations

Neuromorphic Machine Intelligence Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States.

Circuit Research Lab, Intel Corporation, Hillsboro, OR, United States.

Publication Information

Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.

Abstract

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bda4/5478701/5bd180eb3649/fnins-11-00324-g0001.jpg
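
To make the abstract's cost claim concrete (one addition and two comparisons per synaptic weight), below is a minimal Python sketch of an eRBP-style update; it is an illustration under simplifying assumptions, not the authors' implementation. The sketch assumes a dense, vectorized form: the error signal is projected onto each neuron's auxiliary (dendritic) compartment through fixed random feedback weights, a boxcar function of the somatic membrane potential gates plasticity (the two comparisons), and the gated modulation is added to every synapse that just received a presynaptic spike (the one addition). All names here (erbp_update, G, lr, b_min, b_max) are illustrative assumptions, not identifiers from the paper.

import numpy as np

def erbp_update(W, G, pre_spikes, V_mem, error, lr=1e-4, b_min=-1.0, b_max=1.0):
    """One event-driven, error-modulated update of the weight matrix W (sketch).

    W          : (n_post, n_pre) synaptic weights of one layer
    G          : (n_post, n_err) fixed random feedback weights
    pre_spikes : (n_pre,) boolean vector, True where a presynaptic spike occurred
    V_mem      : (n_post,) somatic membrane potentials
    error      : (n_err,) error signal (here assumed to be prediction minus target)
    """
    # Dendritic compartment: error projected through fixed random weights
    # (the "random BP" part -- no transport of the forward weights).
    U = G @ error                              # (n_post,)

    # Boxcar gate on the membrane potential: the two comparisons per weight.
    gate = (V_mem > b_min) & (V_mem < b_max)   # (n_post,)

    # Event-driven update: only synapses with a presynaptic spike change,
    # each by a single (scaled) addition of the gated modulation.
    dW = np.outer(gate * U, pre_spikes.astype(float))
    W -= lr * dW                               # sign follows the error convention above
    return W

# Toy usage with arbitrary dimensions (illustrative only):
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(100, 784))
G = rng.uniform(-1.0, 1.0, size=(100, 10))    # fixed random feedback weights
pre = rng.random(784) < 0.05                  # sparse presynaptic spike events
V = rng.uniform(-2.0, 2.0, size=100)
err = rng.normal(size=10)
W = erbp_update(W, G, pre, V, err)

In an actual event-driven or neuromorphic realization, the dense outer product would not be formed; each presynaptic spike would trigger the comparisons and the single addition only for its fan-out synapses, which is what makes the rule inexpensive in digital or mixed-signal hardware.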
