
Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks.

Affiliations

Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, United States of America.

Department of Physics, Washington University in St. Louis, St. Louis, MO, United States of America.

Publication

PLoS One. 2020 Feb 24;15(2):e0229083. doi: 10.1371/journal.pone.0229083. eCollection 2020.

DOI: 10.1371/journal.pone.0229083
PMID: 32092107
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7039446/
Abstract

Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To this end, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), that rely on stochastic weight updates in order to produce spikes within a short window of the desired times and eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumption about the post-synaptic potential of the neurons or the structure of the network since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but other measures of network activity such as avalanche distributions, can be learned through synaptic homeostasis.
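The heuristic described in the abstract, producing spikes within a short window of each desired time and eliminating spikes outside it by stochastically updating the weights of a pre-synaptic pool, can be sketched in a few lines. The function name, window sizes, learning rate, and the "fired shortly before" pool criterion below are illustrative assumptions for a single post-synaptic neuron, not the paper's exact procedure:

```python
import numpy as np

def pspm_like_update(W, pre_spikes, model_spikes, target_spikes,
                     window=2.0, pre_window=5.0, lr=0.05, rng=None):
    """Illustrative PSPM-style heuristic update (assumed form, not the paper's exact rule).

    W             : weights from each pre-synaptic neuron onto one post-synaptic neuron
    pre_spikes    : list of arrays, spike times of each pre-synaptic neuron
    model_spikes  : array of spike times produced by the model neuron
    target_spikes : array of desired spike times
    """
    rng = np.random.default_rng(rng)
    W = W.copy()

    def pool(t):
        # "Pre-synaptic pool": neurons that fired shortly before time t
        return [i for i, s in enumerate(pre_spikes)
                if np.any((s < t) & (s >= t - pre_window))]

    # Missing spike: a target time with no model spike within +/- window
    # -> stochastically potentiate weights from the pre-synaptic pool.
    for t in target_spikes:
        if not np.any(np.abs(model_spikes - t) <= window):
            for i in pool(t):
                W[i] += lr * rng.random()

    # Spurious spike: a model spike with no target within +/- window
    # -> stochastically depress weights from the pre-synaptic pool.
    for t in model_spikes:
        if not np.any(np.abs(target_spikes - t) <= window):
            for i in pool(t):
                W[i] -= lr * rng.random()
    return W
```

Because the rule only inspects spike times, it needs no gradient of the post-synaptic potential, which is the property the abstract emphasizes; in a full simulation this update would be interleaved with repeated runs of the recurrent network.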


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/dd873e53ad13/pone.0229083.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/ba982f931f15/pone.0229083.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/4db214839ab1/pone.0229083.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/9fd56e304ac4/pone.0229083.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/570ea9f7b8a6/pone.0229083.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/a861a7cdbd0b/pone.0229083.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/30044296c73c/pone.0229083.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/62b32f4d0255/pone.0229083.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2874/7039446/e45213772d4c/pone.0229083.g009.jpg

Similar Articles

1. Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks.
PLoS One. 2020 Feb 24;15(2):e0229083. doi: 10.1371/journal.pone.0229083. eCollection 2020.
2. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.
Neural Netw. 2013 Jul;43:99-113. doi: 10.1016/j.neunet.2013.02.003. Epub 2013 Feb 16.
3. Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.
Neural Netw. 2013 May;41:188-201. doi: 10.1016/j.neunet.2012.11.014. Epub 2012 Dec 20.
4. An online supervised learning method based on gradient descent for spiking neurons.
Neural Netw. 2017 Sep;93:7-20. doi: 10.1016/j.neunet.2017.04.010. Epub 2017 Apr 27.
5. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10141-10153. doi: 10.1109/TNNLS.2022.3164930. Epub 2023 Nov 30.
6. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.
Int J Neural Syst. 2018 Mar;28(2):1750015. doi: 10.1142/S0129065717500150. Epub 2016 Dec 22.
7. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
8. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials.
PLoS One. 2019 Oct 17;14(10):e0222649. doi: 10.1371/journal.pone.0222649. eCollection 2019.
9. An optimal time interval of input spikes involved in synaptic adjustment of spike sequence learning.
Neural Netw. 2019 Aug;116:11-24. doi: 10.1016/j.neunet.2019.03.017. Epub 2019 Apr 1.
10. Span: spike pattern association neuron for learning spatio-temporal spike patterns.
Int J Neural Syst. 2012 Aug;22(4):1250012. doi: 10.1142/S0129065712500128. Epub 2012 Jul 12.

Cited By

1. Low-power artificial neuron networks with enhanced synaptic functionality using dual transistor and dual memristor.
PLoS One. 2025 Jan 27;20(1):e0318009. doi: 10.1371/journal.pone.0318009. eCollection 2025.

References

1. Deep learning in spiking neural networks.
Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
2. Attractor dynamics of a Boolean model of a brain circuit controlled by multiple parameters.
Chaos. 2018 Oct;28(10):106318. doi: 10.1063/1.5042312.
3. Connectivity inference from neural recording data: Challenges, mathematical bases and research directions.
Neural Netw. 2018 Jun;102:120-137. doi: 10.1016/j.neunet.2018.02.016. Epub 2018 Mar 10.
4. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.
PLoS One. 2017 Aug 17;12(8):e0182501. doi: 10.1371/journal.pone.0182501. eCollection 2017.
5. Neocortical activity is stimulus- and scale-invariant.
PLoS One. 2017 May 10;12(5):e0177396. doi: 10.1371/journal.pone.0177396. eCollection 2017.
6. The dialectic of Hebb and homeostasis.
Philos Trans R Soc Lond B Biol Sci. 2017 Mar 5;372(1715). doi: 10.1098/rstb.2016.0258.
7. Integrating Hebbian and homeostatic plasticity: the current state of the field and future research directions.
Philos Trans R Soc Lond B Biol Sci. 2017 Mar 5;372(1715). doi: 10.1098/rstb.2016.0158.
8. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
PLoS One. 2016 Aug 17;11(8):e0161335. doi: 10.1371/journal.pone.0161335. eCollection 2016.
9. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.
10. An attractor-based complexity measurement for Boolean recurrent neural networks.
PLoS One. 2014 Apr 11;9(4):e94204. doi: 10.1371/journal.pone.0094204. eCollection 2014.