Department of Bioengineering, Imperial College London, London, United Kingdom.
Department of Mathematics, Imperial College London, London, United Kingdom.
PLoS Comput Biol. 2020 Jan 21;16(1):e1007606. doi: 10.1371/journal.pcbi.1007606. eCollection 2020 Jan.
Learning to produce spatiotemporal sequences is a common task the brain has to solve. The same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or phase. Different spatiotemporal patterns can be learned and encoded in the synaptic weights onto the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model can learn spatiotemporal dynamics on behaviourally relevant time scales, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
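The architecture described above — a recurrent "clock" network whose trajectory encodes time, read out by units whose weights are shaped by Hebbian learning — can be illustrated with a minimal sketch. This is a simplified rate-based analogue, not the authors' spiking model: the network size, gain, time constants, learning rate, and target sequence below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 100, 5, 200          # recurrent units, read-out units, time steps
dt, tau = 1.0, 10.0            # integration step and membrane time constant (illustrative)

# Random recurrent weights for the driver ("clock") network.
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))

def run_clock():
    """Simulate the recurrent network from a fixed initial state.

    Because there is no noise, the trajectory is reproducible: the same
    temporal pattern of rates unfolds on every run, serving as a time code.
    """
    x = np.full(N, 0.1)
    rates = np.zeros((T, N))
    for t in range(T):
        r = np.tanh(x)
        x += dt / tau * (-x + W @ r)
        rates[t] = r
    return rates

rates = run_clock()

# Target spatiotemporal pattern: each read-out unit should be active
# in its own consecutive time window, forming a sequence.
target = np.zeros((T, M))
for m in range(M):
    target[m * T // M:(m + 1) * T // M, m] = 1.0

# Hebbian learning of the read-out weights: dW ∝ post-activity × pre-activity.
eta = 0.01
Wout = np.zeros((M, N))
for t in range(T):
    Wout += eta * np.outer(target[t], rates[t])

# "Replay": rerun the clock and project through the learned weights.
replay = run_clock() @ Wout.T      # shape (T, M)
decoded = replay.argmax(axis=1)    # winning read-out unit at each time step
```

Because the driver network is deterministic here, the time code is identical during learning and replay; in the paper's setting, robustness of replay under spontaneous (noisy) activity is the nontrivial result, which this sketch does not attempt to reproduce.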