
Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).

Authors

Kaiser Jacques, Mostafa Hesham, Neftci Emre

Affiliations

FZI Research Center for Information Technology, Karlsruhe, Germany.

Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States.

Publication

Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.

DOI: 10.3389/fnins.2020.00424
PMID: 32477050
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7235446/
Abstract

A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. The challenge preventing this is largely caused by the discrepancy between the dynamical properties of synaptic plasticity and the requirements for gradient backpropagation. Learning algorithms that approximate gradient backpropagation using local error functions can overcome this challenge. Here, we introduce Deep Continuous Local Learning (DECOLLE), a spiking neural network equipped with local error functions for online learning with no memory overhead for computing gradients. DECOLLE is capable of learning deep spatio-temporal representations from spikes relying solely on local information, making it compatible with neurobiology and neuromorphic hardware. Synaptic plasticity rules are derived systematically from user-defined cost functions and neural dynamics by leveraging existing autodifferentiation methods of machine learning frameworks. We benchmark our approach on the event-based neuromorphic datasets N-MNIST and DvsGesture, on which DECOLLE performs comparably to the state-of-the-art. DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures, matching the accuracies of conventional computers on tasks where temporal precision and speed are essential.
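The mechanism the abstract describes — per-layer spiking dynamics whose weights are trained against a layer-local readout error through a smooth surrogate for the non-differentiable spike, with no error signal passed between layers — can be sketched in a few lines. This is a minimal illustrative toy under stated assumptions (a single discrete-time leaky integrate-and-fire unit, a fixed random readout, a fast-sigmoid-style surrogate, and no eligibility traces); the names `lif_step`, `surrogate`, and `train_layer` and all constants are our own, not the paper's implementation.

```python
import random

V_TH = 1.0   # firing threshold
BETA = 0.9   # membrane leak factor per time step

def lif_step(v, w, x):
    """One discrete-time leaky integrate-and-fire update: leak, integrate, fire.
    Returns (post-reset membrane, spike, pre-reset membrane)."""
    v_pre = BETA * v + sum(wi * xi for wi, xi in zip(w, x))
    s = 1.0 if v_pre >= V_TH else 0.0   # non-differentiable spike
    return v_pre - s * V_TH, s, v_pre   # soft reset after a spike

def surrogate(v_pre, k=10.0):
    """Smooth stand-in for the spike derivative, peaked at the threshold."""
    return 1.0 / (1.0 + k * abs(v_pre - V_TH)) ** 2

def train_layer(inputs, targets, n_in, lr=0.1, epochs=200, seed=0):
    """Fit the input weights so that a FIXED random readout of the layer's
    spikes matches the targets, using only the layer-local error — no
    gradient is backpropagated from any other layer."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_in)]
    g = rng.choice([-1.0, 1.0])          # fixed random readout weight
    for _ in range(epochs):
        v = 0.0
        for x, t in zip(inputs, targets):
            v, s, v_pre = lif_step(v, w, x)
            err = g * s - t              # local readout error
            for i in range(n_in):        # surrogate-gradient update
                w[i] -= lr * err * g * surrogate(v_pre) * x[i]
    return w, g
```

In a deep version, each layer would carry its own random readout and loss of this kind, so every weight update stays local in space and time — the property that makes the rule plausible for neuromorphic hardware.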


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36b1/7235446/058e53af2e72/fnins-14-00424-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36b1/7235446/45fabfe79fb2/fnins-14-00424-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36b1/7235446/2b9ba8f7fd88/fnins-14-00424-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36b1/7235446/0773def4b698/fnins-14-00424-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36b1/7235446/ac5b7c74828a/fnins-14-00424-g0005.jpg

Similar articles

1. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.
2. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
5. Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform.
Neural Netw. 2020 Jan;121:319-328. doi: 10.1016/j.neunet.2019.09.008. Epub 2019 Sep 24.
6. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1947-1958. doi: 10.1109/TNNLS.2021.3110991. Epub 2022 May 2.
7. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.
Front Neurosci. 2021 Mar 25;15:608567. doi: 10.3389/fnins.2021.608567. eCollection 2021.
8. Surrogate gradients for analog neuromorphic computing.
Proc Natl Acad Sci U S A. 2022 Jan 25;119(4). doi: 10.1073/pnas.2109194119.
9. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
10. Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.

Cited by

1. Scalable network emulation on analog neuromorphic hardware.
Front Neurosci. 2025 Feb 5;18:1523331. doi: 10.3389/fnins.2024.1523331. eCollection 2024.
2. Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification.
Front Neurosci. 2025 Jan 29;18:1516868. doi: 10.3389/fnins.2024.1516868. eCollection 2024.
3. Artificial Intelligence and Neuroscience: Transformative Synergies in Brain Research and Clinical Applications.
J Clin Med. 2025 Jan 16;14(2):550. doi: 10.3390/jcm14020550.
4. Neural reshaping: the plasticity of human brain and artificial intelligence in the learning process.
Am J Neurodegener Dis. 2024 Dec 25;13(5):34-48. doi: 10.62347/NHKD7661. eCollection 2024.
5. Spike-HAR++: an energy-efficient and lightweight parallel spiking transformer for event-based human action recognition.
Front Comput Neurosci. 2024 Nov 26;18:1508297. doi: 10.3389/fncom.2024.1508297. eCollection 2024.
6. Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks.
Front Neurosci. 2024 Sep 25;18:1449181. doi: 10.3389/fnins.2024.1449181. eCollection 2024.
7. Auto-Spikformer: Spikformer architecture search.
Front Neurosci. 2024 Jul 23;18:1372257. doi: 10.3389/fnins.2024.1372257. eCollection 2024.
8. Emergence of brain-inspired small-world spiking neural network through neuroevolution.
iScience. 2024 Jan 9;27(2):108845. doi: 10.1016/j.isci.2024.108845. eCollection 2024 Feb 16.
9. SHIP: a computational framework for simulating and validating novel technologies in hardware spiking neural networks.
Front Neurosci. 2024 Jan 8;17:1270090. doi: 10.3389/fnins.2023.1270090. eCollection 2023.
10. Learnable Leakage and Onset-Spiking Self-Attention in SNNs with Local Error Signals.
Sensors (Basel). 2023 Dec 12;23(24):9781. doi: 10.3390/s23249781.

References

1. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.
Front Neurosci. 2021 Mar 25;15:608567. doi: 10.3389/fnins.2021.608567. eCollection 2021.
2. Deep Supervised Learning Using Local Errors.
Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.
3. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.
4. Learning in the machine: The symmetries of the deep learning channel.
Neural Netw. 2017 Nov;95:110-133. doi: 10.1016/j.neunet.2017.08.008. Epub 2017 Sep 5.
5. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
6. Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
7. Random synaptic feedback weights support error backpropagation for deep learning.
Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
8. Convolutional networks for fast, energy-efficient neuromorphic computing.
Proc Natl Acad Sci U S A. 2016 Oct 11;113(41):11441-11446. doi: 10.1073/pnas.1604850113. Epub 2016 Sep 20.
9. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Front Neurosci. 2015 Nov 16;9:437. doi: 10.3389/fnins.2015.00437. eCollection 2015.
10. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.