Suppr 超能文献


Online Spatio-Temporal Learning in Deep Neural Networks.

Author information

Bohnstingl Thomas, Wozniak Stanislaw, Pantazi Angeliki, Eleftheriou Evangelos

Publication information

IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8894-8908. doi: 10.1109/TNNLS.2022.3153985. Epub 2023 Oct 27.

DOI: 10.1109/TNNLS.2022.3153985
PMID: 35294357
Abstract

Biological neural networks are equipped with an inherent capability to continuously adapt through online learning. This aspect remains in stark contrast to learning with error backpropagation through time (BPTT), which involves offline computation of the gradients due to the need to unroll the network through time. Here, we present an alternative online learning algorithmic framework for deep recurrent neural networks (RNNs) and spiking neural networks (SNNs), called online spatio-temporal learning (OSTL). It is based on insights from biology and proposes the clear separation of spatial and temporal gradient components. For shallow SNNs, OSTL is gradient equivalent to BPTT, enabling for the first time online training of SNNs with BPTT-equivalent gradients. In addition, the proposed formulation unveils a class of SNN architectures trainable online at low time complexity. Moreover, we extend OSTL to a generic form, applicable to a wide range of network architectures, including networks comprising long short-term memory (LSTM) and gated recurrent units (GRUs). We demonstrate the operation of our algorithmic framework on various tasks from language modeling to speech recognition and obtain results on par with the BPTT baselines.
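To make the central idea concrete, here is a minimal sketch of the spatial/temporal gradient separation the abstract describes, on a toy problem. The single linear leaky-integrator neuron, the teacher-student task, and all names (`lam`, `w_star`, etc.) are illustrative assumptions, not the paper's implementation; the point is only that the online gradient factorizes into a temporal component (an eligibility trace carried forward step by step) and a spatial component (an instantaneous learning signal), so no unrolling through time is required.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, T, lam, lr = 4, 1000, 0.8, 0.02   # inputs, steps, leak factor, learning rate

w = np.zeros(n_in)                      # weights trained online
w_star = rng.normal(size=n_in)          # hidden teacher weights (toy target)

s = s_star = 0.0                        # student / teacher states
e = np.zeros(n_in)                      # eligibility trace, i.e. ds_t/dw
losses = []
for t in range(T):
    x = rng.normal(size=n_in)
    s = lam * s + w @ x                 # student: s_t = lam*s_{t-1} + w.x_t
    s_star = lam * s_star + w_star @ x  # teacher with identical dynamics
    e = lam * e + x                     # temporal part: e_t = lam*e_{t-1} + x_t
    L = s - s_star                      # spatial part: dLoss_t/ds_t
    w -= lr * L * e                     # online update: spatial * temporal
    losses.append(0.5 * L * L)

print(np.mean(losses[: T // 4]), np.mean(losses[-T // 4 :]))
```

For this linear neuron, the trace `e` is exactly the derivative of the state with respect to the weights, so each per-step update coincides with the BPTT gradient of the instantaneous loss — a toy echo of the paper's gradient-equivalence claim for shallow networks.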


Similar articles

1
Online Spatio-Temporal Learning in Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8894-8908. doi: 10.1109/TNNLS.2022.3153985. Epub 2023 Oct 27.
2
Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences.
Neural Netw. 2020 Dec;132:108-120. doi: 10.1016/j.neunet.2020.08.001. Epub 2020 Aug 17.
3
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4
Braille letter reading: A benchmark for spatio-temporal pattern recognition on neuromorphic hardware.
Front Neurosci. 2022 Nov 11;16:951164. doi: 10.3389/fnins.2022.951164. eCollection 2022.
5
Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
6
EXODUS: Stable and efficient training of spiking neural networks.
Front Neurosci. 2023 Feb 8;17:1110444. doi: 10.3389/fnins.2023.1110444. eCollection 2023.
7
Efficient training of spiking neural networks with temporally-truncated local backpropagation through time.
Front Neurosci. 2023 Apr 6;17:1047008. doi: 10.3389/fnins.2023.1047008. eCollection 2023.
8
Gradient-free training of recurrent neural networks using random perturbations.
Front Neurosci. 2024 Jul 10;18:1439155. doi: 10.3389/fnins.2024.1439155. eCollection 2024.
9
Toward robust and scalable deep spiking reinforcement learning.
Front Neurorobot. 2023 Jan 20;16:1075647. doi: 10.3389/fnbot.2022.1075647. eCollection 2022.
10
Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.

Cited by

1
Rapid learning with phase-change memory-based in-memory computing through learning-to-learn.
Nat Commun. 2025 Feb 1;16(1):1243. doi: 10.1038/s41467-025-56345-4.
2
Machine unlearning in brain-inspired neural network paradigms.
Front Neurorobot. 2024 May 21;18:1361577. doi: 10.3389/fnbot.2024.1361577. eCollection 2024.
3
SENECA: building a fully digital neuromorphic processor, design trade-offs and challenges.
Front Neurosci. 2023 Jun 23;17:1187252. doi: 10.3389/fnins.2023.1187252. eCollection 2023.
4
E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware.
Front Neurosci. 2022 Nov 28;16:1018006. doi: 10.3389/fnins.2022.1018006. eCollection 2022.