
A solution to the learning dilemma for recurrent networks of spiking neurons.

Affiliations

Institute of Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b, Graz, Austria.

Publication Information

Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.

DOI: 10.1038/s41467-020-17236-y
PMID: 32681001
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7367848/
Abstract

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.

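The abstract's core idea, replacing backpropagation through time with local eligibility traces combined with an online learning signal, can be illustrated with a minimal sketch. The LIF dynamics, surrogate gradient, and readout below are simplifying assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# Hedged sketch of the e-prop idea: each synapse maintains a local
# eligibility trace of recent pre/post activity, and the gradient is
# accumulated online as (learning signal) x (eligibility trace),
# avoiding backpropagation through time.

rng = np.random.default_rng(0)
n_in, n_rec = 4, 6
alpha = 0.9                          # membrane leak factor (assumed)
w_in = rng.normal(0.0, 0.5, (n_rec, n_in))
w_out = rng.normal(0.0, 0.5, n_rec)

def pseudo_derivative(v, thr=1.0, gamma=0.3):
    # surrogate gradient of the non-differentiable spike function
    return gamma * np.maximum(0.0, 1.0 - np.abs(v - thr))

T = 50
x = rng.random((T, n_in))                      # input spikes/rates (toy data)
target = np.sin(np.linspace(0.0, np.pi, T))   # toy regression target

v = np.zeros(n_rec)
trace = np.zeros((n_rec, n_in))   # one eligibility trace per synapse
grad = np.zeros_like(w_in)

for t in range(T):
    v = alpha * v + w_in @ x[t]               # leaky membrane update
    z = (v >= 1.0).astype(float)              # spike if threshold crossed
    v -= z                                    # soft reset after a spike
    # local eligibility trace: filtered presynaptic input scaled by the
    # postsynaptic pseudo-derivative -- no future information needed
    trace = alpha * trace + np.outer(pseudo_derivative(v), x[t])
    y = w_out @ z                             # linear readout
    learning_signal = (y - target[t]) * w_out  # error broadcast to neurons
    grad += learning_signal[:, None] * trace   # online gradient accumulation

w_in -= 0.01 * grad               # one gradient-descent step on input weights
```

Because the trace and learning signal are both available at time t, the weight update is fully online, which is what makes the scheme a candidate for on-chip learning in spike-based hardware.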

Figures (from the PMC full text):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/a4a64ce0a90d/41467_2020_17236_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/2669e79c178c/41467_2020_17236_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/db4e3276cf64/41467_2020_17236_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/eb08d504f804/41467_2020_17236_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/15f05c67f3ce/41467_2020_17236_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d771/7367848/d7d34cbd9549/41467_2020_17236_Fig6_HTML.jpg

Similar Articles

1. A solution to the learning dilemma for recurrent networks of spiking neurons.
Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.
2. Learning in neural networks by reinforcement of irregular spiking.
Phys Rev E Stat Nonlin Soft Matter Phys. 2004 Apr;69(4 Pt 1):041909. doi: 10.1103/PhysRevE.69.041909. Epub 2004 Apr 30.
3. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
PLoS Comput Biol. 2015 Dec 3;11(12):e1004566. doi: 10.1371/journal.pcbi.1004566. eCollection 2015 Dec.
4. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
5. Introduction to spiking neural networks: Information processing, learning and applications.
Acta Neurobiol Exp (Wars). 2011;71(4):409-33. doi: 10.55782/ane-2011-1862.
6. Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks.
PLoS One. 2021 Feb 16;16(2):e0247014. doi: 10.1371/journal.pone.0247014. eCollection 2021.
7. Deep learning in spiking neural networks.
Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
8. A review of learning in biologically plausible spiking neural networks.
Neural Netw. 2020 Feb;122:253-272. doi: 10.1016/j.neunet.2019.09.036. Epub 2019 Oct 11.
9. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
PLoS Comput Biol. 2013 Apr;9(4):e1003024. doi: 10.1371/journal.pcbi.1003024. Epub 2013 Apr 11.
10. Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.
Neural Netw. 2013 May;41:188-201. doi: 10.1016/j.neunet.2012.11.014. Epub 2012 Dec 20.

Cited By

1. The coming decade of digital brain research: A vision for neuroscience at the intersection of technology and computing.
Imaging Neurosci (Camb). 2024 Apr 18;2. doi: 10.1162/imag_a_00137. eCollection 2024.
2. Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks.
Nat Commun. 2025 Jul 23;16(1):6784. doi: 10.1038/s41467-025-61309-9.
3. Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
PLoS Comput Biol. 2025 Jul 21;21(7):e1013224. doi: 10.1371/journal.pcbi.1013224. eCollection 2025 Jul.
4. Global error signal guides local optimization in mismatch calculation.
bioRxiv. 2025 Jul 10:2025.07.07.663505. doi: 10.1101/2025.07.07.663505.
5. Self-Contrastive Forward-Forward algorithm.
Nat Commun. 2025 Jul 1;16(1):5978. doi: 10.1038/s41467-025-61037-0.
6. Energy optimization induces predictive-coding properties in a multi-compartment spiking neural network model.
PLoS Comput Biol. 2025 Jun 10;21(6):e1013112. doi: 10.1371/journal.pcbi.1013112. eCollection 2025 Jun.
7. Reinforced liquid state machines-new training strategies for spiking neural networks based on reinforcements.
Front Comput Neurosci. 2025 May 23;19:1569374. doi: 10.3389/fncom.2025.1569374. eCollection 2025.
8. Brain-like variational inference.
ArXiv. 2025 May 16:arXiv:2410.19315v2.
9. A neural implementation model of feedback-based motor learning.
Nat Commun. 2025 Feb 20;16(1):1805. doi: 10.1038/s41467-024-54738-5.
10. Rapid learning with phase-change memory-based in-memory computing through learning-to-learn.
Nat Commun. 2025 Feb 1;16(1):1243. doi: 10.1038/s41467-025-56345-4.