Suppr 超能文献




Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks.

Author information

Schmidgall Samuel, Hays Joe

Affiliations

U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States.

Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States.

Publication information

Front Neurosci. 2023 May 12;17:1183321. doi: 10.3389/fnins.2023.1183321. eCollection 2023.

DOI:10.3389/fnins.2023.1183321
PMID:37250397
Full text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC10213417/
Abstract

We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in Spiking Neural Networks (SNNs) with gradient descent via a framework of learning-to-learn to address challenging learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
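The three-factor learning rules the abstract refers to typically combine a Hebbian eligibility trace (driven by pre- and postsynaptic spike coincidence) with a global modulatory third factor that gates the actual weight change. A minimal NumPy sketch of one such update follows; the network sizes, spike rates, time constants, and the random modulatory signal are illustrative assumptions, not the configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 10, 5
w = 0.1 * rng.random((n_pre, n_post))  # synaptic weights
e = np.zeros_like(w)                   # eligibility traces, one per synapse
eta = 0.01                             # learning rate
tau_e = 20.0                           # trace decay time constant (ms)
dt = 1.0                               # simulation step (ms)

for t in range(100):
    # stand-in binary spike trains (a real SNN would produce these)
    pre = (rng.random(n_pre) < 0.05).astype(float)
    post = (rng.random(n_post) < 0.05).astype(float)

    # Hebbian coincidence accumulates into a leaky eligibility trace
    e += dt * (-e / tau_e + np.outer(pre, post))

    # third factor: a global modulatory signal, e.g. a reward prediction error
    m = rng.standard_normal()

    # three-factor update: the modulator gates the traced Hebbian term
    w += eta * m * e
```

In the paper's learning-to-learn setting, the parameters of a rule like this (here the fixed scalars `eta` and `tau_e`) are what the outer gradient-descent loop would optimize, while the inner loop applies the rule online.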


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/39b7495fe49a/fnins-17-1183321-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/19f332debebb/fnins-17-1183321-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/a884ee06f218/fnins-17-1183321-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/b5b6960f7110/fnins-17-1183321-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/8003d2908308/fnins-17-1183321-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6ba3/10213417/1520b0db1a82/fnins-17-1183321-g0006.jpg

Similar articles

1
Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks.
Front Neurosci. 2023 May 12;17:1183321. doi: 10.3389/fnins.2023.1183321. eCollection 2023.
2
SpikePropamine: Differentiable Plasticity in Spiking Neural Networks.
Front Neurorobot. 2021 Sep 22;15:629210. doi: 10.3389/fnbot.2021.629210. eCollection 2021.
3
HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5841-5855. doi: 10.1109/TNNLS.2021.3131356. Epub 2023 Sep 1.
4
A solution to the learning dilemma for recurrent networks of spiking neurons.
Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.
5
A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
6
Evolving interpretable plasticity for spiking networks.
Elife. 2021 Oct 28;10:e66273. doi: 10.7554/eLife.66273.
7
Locally connected spiking neural networks for unsupervised feature learning.
Neural Netw. 2019 Nov;119:332-340. doi: 10.1016/j.neunet.2019.08.016. Epub 2019 Aug 26.
8
Developmental Plasticity-Inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2025 Jan;47(1):240-251. doi: 10.1109/TPAMI.2024.3467268. Epub 2024 Dec 4.
9
Few-Shot Learning in Spiking Neural Networks by Multi-Timescale Optimization.
Neural Comput. 2021 Aug 19;33(9):2439-2472. doi: 10.1162/neco_a_01423.
10
A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.
Neural Netw. 2013 Jul;43:99-113. doi: 10.1016/j.neunet.2013.02.003. Epub 2013 Feb 16.

Cited by

1
Neuromorphic computing for robotic vision: algorithms to hardware advances.
Commun Eng. 2025 Aug 13;4(1):152. doi: 10.1038/s44172-025-00492-5.
2
Biologically plausible gated recurrent neural networks for working memory and learning-to-learn.
PLoS One. 2024 Dec 31;19(12):e0316453. doi: 10.1371/journal.pone.0316453. eCollection 2024.

References cited in this article

1
The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity.
Front Neurosci. 2022 Feb 24;16:795876. doi: 10.3389/fnins.2022.795876. eCollection 2022.
2
Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network.
Proc Natl Acad Sci U S A. 2021 Dec 21;118(51). doi: 10.1073/pnas.2111821118.
3
Evolving interpretable plasticity for spiking networks.
Elife. 2021 Oct 28;10:e66273. doi: 10.7554/eLife.66273.
4
SpikePropamine: Differentiable Plasticity in Spiking Neural Networks.
Front Neurorobot. 2021 Sep 22;15:629210. doi: 10.3389/fnbot.2021.629210. eCollection 2021.
5
Meta-Learning in Neural Networks: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5149-5169. doi: 10.1109/TPAMI.2021.3079209. Epub 2022 Aug 4.
6
Modeling Working Memory in a Spiking Neuron Network Accompanied by Astrocytes.
Front Cell Neurosci. 2021 Mar 31;15:631485. doi: 10.3389/fncel.2021.631485. eCollection 2021.
7
Dopamine: The Neuromodulator of Long-Term Synaptic Plasticity, Reward and Movement Control.
Cells. 2021 Mar 26;10(4):735. doi: 10.3390/cells10040735.
8
A Continual Learning Survey: Defying Forgetting in Classification Tasks.
IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3366-3385. doi: 10.1109/TPAMI.2021.3057446. Epub 2022 Jun 3.
9
The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks.
Neural Comput. 2021 Mar 26;33(4):899-925. doi: 10.1162/neco_a_01367.
10
A solution to the learning dilemma for recurrent networks of spiking neurons.
Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.