Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal.
Group for Neural Theory, INSERM U960, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France.
PLoS Comput Biol. 2020 Mar 16;16(3):e1007692. doi: 10.1371/journal.pcbi.1007692. eCollection 2020 Mar.
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit the single-neuron variability and tuning curves typically observed in cortex, yet this variability paradoxically coexists with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
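To illustrate the kind of coordinated spike coding described above, the following is a minimal sketch (not the paper's implementation) of a network whose voltages equal the projected coding error and whose thresholds follow from the decoder norms. All sizes, the signal, and the decoding weights `D` are illustrative assumptions; a neuron fires greedily whenever its spike would reduce the readout error.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, steps, dt = 20, 2, 2000, 1e-3
lam = 10.0                            # leak rate of the readout filter (assumed)
D = rng.normal(0.0, 0.1, (K, N))      # fixed decoding weights (an assumption)
thresh = 0.5 * np.sum(D**2, axis=0)   # firing thresholds T_i = ||D_i||^2 / 2

t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t),
              np.cos(2 * np.pi * t)])  # 2-D target signal (illustrative)

r = np.zeros(N)                        # filtered spike trains
spikes = np.zeros((steps, N))
err = np.zeros(steps)
for k in range(steps):
    V = D.T @ (x[:, k] - D @ r)        # voltage = projected coding error
    i = np.argmax(V - thresh)          # at most one spike per time step
    if V[i] > thresh[i]:               # greedy rule: spike only if it
        r[i] += 1.0                    # reduces the squared readout error
        spikes[k, i] = 1.0
    r -= dt * lam * r                  # leak of the filtered spike trains
    err[k] = np.linalg.norm(x[:, k] - D @ r)
```

The spike condition follows because adding decoder column `D_i` to the estimate changes the squared error by `-2 V_i + ||D_i||^2`, which is negative exactly when `V_i` exceeds the threshold; the readout `D @ r` then tracks the signal with an error bounded by the decoder norms.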
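The spike-timing-dependent learning the abstract refers to can be sketched as a voltage-dependent, anti-Hebbian update of the recurrent weights at each presynaptic spike. The rule's exact form, the constants (`mu`, `eta`), and the feedforward construction `c = dx/dt + λx` below are simplifying assumptions for illustration, not the paper's derived rule; recurrent weights start at zero and self-connections become inhibitory (negative) as the network learns.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, steps, dt = 20, 2, 5000, 1e-3
lam, mu, eta = 10.0, 0.02, 0.01            # leak, firing cost, learning rate (assumed)
D = rng.normal(0.0, 0.1, (K, N))           # target decoder (an assumption)
F = D.T                                    # feedforward weights
W = np.zeros((N, N))                       # recurrent weights, learned from zero
thresh = 0.5 * (np.sum(D**2, axis=0) + mu) # thresholds include the firing cost

t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t),
              np.cos(2 * np.pi * t)])      # 2-D target signal (illustrative)
c = np.gradient(x, dt, axis=1) + lam * x   # input current c = dx/dt + lam*x

V = np.zeros(N)
r = np.zeros(N)
n_spikes = np.zeros(N)
for k in range(steps):
    V += dt * (-lam * V + F @ c[:, k])     # leaky integration of the input
    i = np.argmax(V - thresh)              # at most one spike per time step
    if V[i] > thresh[i]:
        # local rule at the spike of neuron i: each postsynaptic weight
        # moves toward minus the postsynaptic voltage plus rate cost
        W[:, i] -= eta * (V + mu * r + W[:, i])
        V += W[:, i]                       # recurrent kick (mostly inhibition)
        r[i] += 1.0
        n_spikes[i] += 1
    r -= dt * lam * r                      # leak of the filtered spike trains
```

Under this update, a neuron's self-connection settles near minus its own voltage at spike time, implementing the after-spike reset; across neurons the weights drift toward the decoder overlaps, which is the structure the optimal network requires.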