Stapmanns Jonas, Hahne Jan, Helias Moritz, Bolten Matthias, Diesmann Markus, Dahmen David
Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany.
Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany.
Front Neuroinform. 2021 Jun 10;15:609147. doi: 10.3389/fninf.2021.609147. eCollection 2021.
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and to the time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving information provide guidelines for the design of learning rules that make them practically usable in large-scale networks.
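The core idea of the abstract, archiving the postsynaptic membrane potential so that a synapse can be updated lazily at spike events rather than at every time step, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the NEST implementation or the Clopath/Urbanczik-Senn rules; the class names, the simple threshold rule, and the single-synapse trimming policy are all assumptions made for illustration.

```python
from collections import deque


class VoltageArchive:
    """History of (time, V_m) samples kept by the postsynaptic neuron.

    Hypothetical sketch: a real simulator trims entries only once the
    oldest of *all* incoming synapses no longer needs them.
    """

    def __init__(self):
        self.samples = deque()  # (t, v) pairs, oldest first

    def record(self, t, v):
        # Called by the neuron's time-driven state update each step.
        self.samples.append((t, v))

    def read_and_trim(self, t_last, t_now):
        """Return samples in (t_last, t_now] and drop samples at or
        before t_last (safe here because a single synapse reads them)."""
        while self.samples and self.samples[0][0] <= t_last:
            self.samples.popleft()
        return [(t, v) for (t, v) in self.samples if t <= t_now]


class EventDrivenSynapse:
    """Updates its weight only when a presynaptic spike arrives,
    integrating the archived voltage trace since the last update."""

    def __init__(self, w, eta=1e-3, theta=-55.0):
        self.w = w          # synaptic weight
        self.eta = eta      # learning rate (toy value)
        self.theta = theta  # voltage threshold of the toy rule (mV)
        self.t_last = 0.0   # time of the previous update

    def on_pre_spike(self, t_spike, archive):
        # Toy voltage-based rule: potentiate for every archived sample
        # in which V_m exceeded theta since the last presynaptic spike.
        for t, v in archive.read_and_trim(self.t_last, t_spike):
            self.w += self.eta * max(0.0, v - self.theta)
        self.t_last = t_spike
        return self.w
```

Between presynaptic spikes the synapse does no work at all; the cost of the plasticity rule is shifted into the neuron's archive, which is exactly the trade-off in memory versus computation that the paper analyzes. A compressed or sampled variant would store a reduced summary in `record` instead of every raw sample.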