What can a neuron learn with spike-timing-dependent plasticity?

Authors

Robert Legenstein, Christian Naeger, Wolfgang Maass

Affiliation

Institute for Theoretical Computer Science, Technische Universitaet Graz, A-8010 Graz, Austria.

Publication information

Neural Comput. 2005 Nov;17(11):2337-82. doi: 10.1162/0899766054796888.

Abstract

Spiking neurons are very flexible computational modules: depending on the values of their adjustable synaptic parameters, they can implement an enormous variety of transformations F from input spike trains to output spike trains. In this letter we examine to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition for learnability by perceptrons. However, the linear separability criterion must here be applied to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models of neurons and dynamic synapses, and for more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation, suggested by experimental data, where STDP modulates the initial release probability of dynamic synapses.
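
To make the teacher-forcing setup concrete, below is a minimal sketch of pair-based STDP with a clamped output, written in Python with NumPy. This is an illustration under simplifying assumptions, not the paper's actual neuron or synapse model: the output spike train is simply clamped to a fixed Poisson target signal, and all parameter values (rates, time constants, update amplitudes) are made up for the example.

```python
# Minimal sketch of STDP with teacher forcing: pair-based STDP with
# exponential traces, output spikes clamped to a target spike train.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_syn = 20          # number of input synapses
dt = 1e-3           # simulation time step (s)
T = 2.0             # duration of one training trial (s)
steps = int(T / dt)

rate_in = 20.0      # Poisson input rate (Hz)
tau_plus = 20e-3    # LTP time constant (s)
tau_minus = 20e-3   # LTD time constant (s)
A_plus = 0.005      # LTP amplitude
A_minus = 0.006     # LTD amplitude (slight LTD dominance, as in many STDP models)
w = rng.uniform(0.0, 1.0, n_syn)         # synaptic weights in [0, 1]

# Hypothetical target: a fixed ~10 Hz Poisson spike train as teacher signal.
target = rng.random(steps) < 10.0 * dt

x_trace = np.zeros(n_syn)   # presynaptic eligibility traces
y_trace = 0.0               # postsynaptic trace

for t in range(steps):
    pre = rng.random(n_syn) < rate_in * dt   # Poisson input spikes this step

    # Decay the STDP traces, then register new presynaptic spikes.
    x_trace *= np.exp(-dt / tau_plus)
    y_trace *= np.exp(-dt / tau_minus)
    x_trace[pre] += 1.0

    # Teacher forcing: the output spike is clamped to the target signal,
    # regardless of the neuron's own membrane state.
    post = target[t]

    if post:
        # Pre-before-post pairings potentiate.
        w += A_plus * x_trace
        y_trace += 1.0
    # Post-before-pre pairings depress, applied at each presynaptic spike.
    w[pre] -= A_minus * y_trace

    np.clip(w, 0.0, 1.0, out=w)   # hard weight bounds

print("final weights:", np.round(w, 3))
```

In this simplified picture, convergence means that over repeated trials the weight vector settles so that the neuron, once unclamped, reproduces the target transformation; whether that happens depends on the input statistics, which is exactly the distinction the average-case results in the abstract draw.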
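The linear separability criterion on the columns of the input correlation matrix can likewise be illustrated with a toy check. The sketch below, a hypothetical construction rather than the paper's proof or simulation setup, builds a made-up correlation matrix for n Poisson inputs, assigns an arbitrary target labelling of synapses (strong vs. weak), and runs a plain perceptron on the columns to test separability.

```python
# Toy check of the linear-separability criterion on the columns of an
# input correlation matrix, using a plain perceptron. The matrix and the
# target labelling are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Symmetric "correlation matrix" of n Poisson inputs: unit diagonal,
# weak off-diagonal correlations (illustrative values only).
C = 0.2 * rng.random((n, n))
C = (C + C.T) / 2.0
np.fill_diagonal(C, 1.0)

# Hypothetical target: synapses 0..3 should end up strong (+1),
# synapses 4..7 weak (-1). The criterion asks whether the columns of C
# are linearly separable under this labelling.
labels = np.array([1, 1, 1, 1, -1, -1, -1, -1])

# Perceptron with bias; each row of X is one column of C.
X = np.hstack([C.T, np.ones((n, 1))])
v = np.zeros(n + 1)
separable = False
for _ in range(10_000):                  # bounded perceptron loop
    errors = 0
    for x, y in zip(X, labels):
        if y * (v @ x) <= 0:             # misclassified -> perceptron update
            v += y * x
            errors += 1
    if errors == 0:
        separable = True
        break

print("columns linearly separable:", separable)
```

The point of the criterion is that separability is tested on the columns of the correlation matrix, not on the raw input patterns as in the classical perceptron setting.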
