Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, United States of America.
Department of Physics, Washington University in St. Louis, St. Louis, MO, United States of America.
PLoS One. 2020 Feb 24;15(2):e0229083. doi: 10.1371/journal.pone.0229083. eCollection 2020.
Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To this end, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), that rely on stochastic weight updates in order to produce spikes within a short window of the desired times and to eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumptions about the post-synaptic potential of the neurons or the structure of the network, since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but that other measures of network activity, such as avalanche distributions, can be learned through synaptic homeostasis.
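The core idea of a PSPM-style local update can be sketched as follows: for each target spike that the model neuron misses, potentiate a randomly chosen pool of its pre-synaptic weights; for each spurious model spike with no target spike nearby, depress a random pool. This is a minimal illustration under assumed conventions, not the paper's exact rule; the function name `pspm_update` and the parameters `window`, `delta`, and `pool_frac` are hypothetical placeholders.

```python
import numpy as np

def pspm_update(weights, model_spikes, target_spikes, post, rng,
                window=2.0, delta=0.1, pool_frac=0.5):
    """One PSPM-style pass for a single post-synaptic neuron `post`.

    weights: (N, N) array with weights[i, j] = synapse from pre-neuron i
             to post-neuron j.
    model_spikes / target_spikes: arrays of spike times for neuron `post`.
    window, delta, pool_frac: hypothetical parameters (match tolerance,
    update size, and fraction of the pre-synaptic pool updated per event).
    """
    n_pre = weights.shape[0]
    pool_size = max(1, int(pool_frac * n_pre))

    def matched(t, other):
        # A spike at time t counts as matched if any spike in `other`
        # falls within the tolerance window around it.
        return np.any(np.abs(other - t) <= window)

    w = weights.copy()
    # Missing spike: strengthen a random pre-synaptic pool to evoke it.
    for t in target_spikes:
        if not matched(t, model_spikes):
            pool = rng.choice(n_pre, size=pool_size, replace=False)
            w[pool, post] += delta
    # Spurious spike: weaken a random pre-synaptic pool to suppress it.
    for t in model_spikes:
        if not matched(t, target_spikes):
            pool = rng.choice(n_pre, size=pool_size, replace=False)
            w[pool, post] -= delta
    return w
```

Because the updates are stochastic and act directly on spike-time mismatches, no gradient of the post-synaptic potential is needed, which is what makes the rule agnostic to neuron model and network structure, as the abstract notes.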