
Matching recall and storage in sequence learning with spiking neural networks.

Author information

Department of Physiology, and Center for Cognition, Learning, and Memory, University of Bern, CH-3012 Bern, Switzerland.

Publication information

J Neurosci. 2013 Jun 5;33(23):9565-75. doi: 10.1523/JNEUROSCI.4098-12.2013.

Abstract

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
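The spike-timing dependence described above can be illustrated with a minimal sketch. This is not the paper's derived rule (which is matched to the stochastic network dynamics via the KL-divergence bound); it is a generic pairwise exponential STDP window showing the qualitative behavior the abstract states: a presynaptic spike preceding a postsynaptic spike elicits potentiation, while the reverse timing yields depression. The amplitudes `a_plus`, `a_minus` and time constant `tau` are hypothetical parameters chosen for illustration.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms).

    Illustrative pairwise STDP window, not the rule derived in the paper:
    pre-before-post (dt >= 0) gives potentiation, post-before-pre gives
    depression, both decaying exponentially with the timing difference.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)    # pre leads post: potentiation
    return -a_minus * math.exp(dt / tau)       # post leads pre: depression

# Pre spike at 10 ms, post spike at 15 ms: positive weight change.
print(stdp_dw(10.0, 15.0))
# Reversed order: negative weight change.
print(stdp_dw(15.0, 10.0))
```

Under this sketch, larger timing gaps in either direction produce exponentially smaller updates, which is the standard shape of measured STDP windows.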

Similar articles

1. Matching recall and storage in sequence learning with spiking neural networks. J Neurosci. 2013 Jun 5;33(23):9565-75. doi: 10.1523/JNEUROSCI.4098-12.2013.
2. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses. Neural Comput. 2019 Dec;31(12):2368-2389. doi: 10.1162/neco_a_01238. Epub 2019 Oct 15.
3. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput. 2007 Jun;19(6):1468-502. doi: 10.1162/neco.2007.19.6.1468.
4. Stochastic variational learning in recurrent spiking networks. Front Comput Neurosci. 2014 Apr 4;8:38. doi: 10.3389/fncom.2014.00038. eCollection 2014.
5. Synaptic dynamics: linear model and adaptation algorithm. Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.
6. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Comput. 2006 Jun;18(6):1318-48. doi: 10.1162/neco.2006.18.6.1318.
7. Spatiotemporal learning in analog neural networks using spike-timing-dependent synaptic plasticity. Phys Rev E Stat Nonlin Soft Matter Phys. 2007 May;75(5 Pt 1):051917. doi: 10.1103/PhysRevE.75.051917. Epub 2007 May 29.
8. STDP provides the substrate for igniting synfire chains by spatiotemporal input patterns. Neural Comput. 2008 Feb;20(2):415-35. doi: 10.1162/neco.2007.11-05-043.
10. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule. Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.

Cited by

2. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics. PLoS Comput Biol. 2024 Nov 4;20(11):e1012531. doi: 10.1371/journal.pcbi.1012531. eCollection 2024 Nov.
3. Fast adaptation to rule switching using neuronal surprise. PLoS Comput Biol. 2024 Feb 20;20(2):e1011839. doi: 10.1371/journal.pcbi.1011839. eCollection 2024 Feb.
4. Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception. Front Comput Neurosci. 2023 Sep 25;17:1207361. doi: 10.3389/fncom.2023.1207361. eCollection 2023.
5. Error-based or target-based? A unified framework for learning in recurrent spiking networks. PLoS Comput Biol. 2022 Jun 21;18(6):e1010221. doi: 10.1371/journal.pcbi.1010221. eCollection 2022 Jun.
6. Learning as filtering: Implications for spike-based plasticity. PLoS Comput Biol. 2022 Feb 23;18(2):e1009721. doi: 10.1371/journal.pcbi.1009721. eCollection 2022 Feb.
7. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A. 2022 Feb 15;119(7). doi: 10.1073/pnas.2106692119.
8. Canonical neural networks perform active inference. Commun Biol. 2022 Jan 14;5(1):55. doi: 10.1038/s42003-021-02994-2.
9. Brain-inspired global-local learning incorporated with neuromorphic computing. Nat Commun. 2022 Jan 10;13(1):65. doi: 10.1038/s41467-021-27653-2.
10. Mapping input noise to escape noise in integrate-and-fire neurons: a level-crossing approach. Biol Cybern. 2021 Oct;115(5):539-562. doi: 10.1007/s00422-021-00899-1. Epub 2021 Oct 19.

References

1. Astrocyte signaling controls spike timing-dependent depression at neocortical synapses. Nat Neurosci. 2012 Mar 25;15(5):746-53. doi: 10.1038/nn.3075.
2. Activity recall in a visual cortical ensemble. Nat Neurosci. 2012 Jan 22;15(3):449-55, S1-2. doi: 10.1038/nn.3036.
3. Local Ca2+ detection and modulation of synaptic release by astrocytes. Nat Neurosci. 2011 Sep 11;14(10):1276-84. doi: 10.1038/nn.2929.
4. Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. Front Comput Neurosci. 2010 Oct 4;4:24. doi: 10.3389/fncom.2010.00024. eCollection 2010.
5. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci. 2010 Mar;13(3):344-52. doi: 10.1038/nn.2479. Epub 2010 Jan 24.
6. Long-term potentiation depends on release of D-serine from astrocytes. Nature. 2010 Jan 14;463(7278):232-6. doi: 10.1038/nature08673.
8. Bayesian retrieval in associative memories with storage errors. IEEE Trans Neural Netw. 1998;9(4):705-13. doi: 10.1109/72.701183.
9. Triplets of spikes in a model of spike timing-dependent plasticity. J Neurosci. 2006 Sep 20;26(38):9673-82. doi: 10.1523/JNEUROSCI.1425-06.2006.
10. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Comput. 2006 Jun;18(6):1318-48. doi: 10.1162/neco.2006.18.6.1318.
