Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council of Italy (LOCEN-ISTC-CNR), Roma, Italy.
PLoS Comput Biol. 2018 Aug 28;14(8):e1006227. doi: 10.1371/journal.pcbi.1006227. eCollection 2018 Aug.
Learning in biologically relevant neural-network models usually relies on Hebbian learning rules. Typical implementations of these rules change the synaptic strength on the basis of the co-occurrence of the neural events taking place at a certain time in the pre- and post-synaptic neurons. Differential Hebbian learning (DHL) rules, instead, are able to update the synapse by taking into account the temporal relation, captured with derivatives, between the neural events happening in the recent past. The few DHL rules proposed so far can update the synaptic weights in only a few ways: this is a limitation for the study of dynamical neurons and neural-network models. Moreover, empirical evidence on brain spike-timing-dependent plasticity (STDP) shows that different neurons express a surprisingly rich repertoire of different learning processes going far beyond existing DHL rules. This opens up a second problem of how to capture such processes with DHL rules. Here we propose a general DHL (G-DHL) rule that generates the existing rules and many others. The rule is highly expressive, as it combines in different ways the pre- and post-synaptic neuron signals and their derivatives. The rule's flexibility is demonstrated by applying it to various signals of artificial neurons and by fitting several different STDP experimental data sets. To these ends, we propose techniques to pre-process the neural signals and capture the temporal relations between the neural events of interest. We also propose a procedure to automatically identify the rule components and parameters that best fit different STDP data sets, and show how the identified components might be used to heuristically guide the search for the biophysical mechanisms underlying STDP. Overall, the results show that the G-DHL rule represents a useful means to study time-sensitive learning processes in both artificial neural networks and the brain.
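The family of rules described above can be illustrated with a minimal sketch. The snippet below is a hypothetical parameterization, not the paper's exact G-DHL formulation: it accumulates, over a time window, the four possible products of the sampled pre-/post-synaptic signals and their time derivatives, with the coefficients `a`–`d` selecting the mixture (the classic derivative-derivative DHL rule corresponds to `a=1` and the other coefficients set to zero; the plain Hebbian co-activation rule to `d=1`). The function name, signature, and signal shapes are all assumptions made for illustration.

```python
import numpy as np

def gdhl_update(pre, post, dt, eta=0.01, a=1.0, b=0.0, c=0.0, d=0.0):
    """Illustrative differential-Hebbian weight update (hypothetical form).

    pre, post: 1-D arrays sampling the pre-/post-synaptic signals over time.
    The weight change integrates weighted products of the signals and
    their time derivatives over the window.
    """
    dpre = np.gradient(pre, dt)    # derivative of pre-synaptic signal
    dpost = np.gradient(post, dt)  # derivative of post-synaptic signal
    integrand = (a * dpre * dpost   # derivative x derivative (classic DHL)
                 + b * pre * dpost  # signal x derivative
                 + c * dpre * post  # derivative x signal
                 + d * pre * post)  # plain Hebbian co-activation
    return eta * np.sum(integrand) * dt

# Two Gaussian activity bumps, with the post-synaptic one 20 ms later,
# as a toy stand-in for a pre-before-post pairing.
t = np.arange(0.0, 1.0, 0.001)
pre = np.exp(-((t - 0.40) ** 2) / 0.002)
post = np.exp(-((t - 0.42) ** 2) / 0.002)
dw = gdhl_update(pre, post, dt=0.001)
```

Reversing the temporal order of the two bumps changes the sign of the derivative-dependent terms, which is the basic mechanism that lets such rules express timing-sensitive (STDP-like) plasticity.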