Kotelnikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences (Ulyanovsk branch), 48/2 Goncharov Str., Ulyanovsk 432071, Russia.
Ulyanovsk State Technical University, 32 Severny Venets, Ulyanovsk 432027, Russia.
Neural Netw. 2022 Nov;155:512-522. doi: 10.1016/j.neunet.2022.09.003. Epub 2022 Sep 7.
Artificial neural networks (ANNs) experience catastrophic forgetting (CF) during sequential learning. In contrast, the brain can learn continuously without any signs of catastrophic forgetting. Spiking neural networks (SNNs) are the next generation of ANNs, with many features borrowed from biological neural networks. Thus, SNNs potentially promise better resilience to CF. In this paper, we study the susceptibility of SNNs to CF and test several biologically inspired methods for mitigating catastrophic forgetting. The SNNs are trained with biologically plausible local training rules based on spike-timing-dependent plasticity (STDP). Local training prohibits the direct use of CF prevention methods based on gradients of a global loss function. We developed and tested a method for determining the importance of synapses (weights) based on stochastic Langevin dynamics, without the need for gradients. Several other methods of catastrophic forgetting prevention adapted from analog neural networks were tested as well. The experiments were performed on freely available datasets in the SpykeTorch environment.
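As a rough illustration of the two ingredients named in the abstract (local STDP weight updates and a gradient-free, Langevin-style importance estimate used to protect consolidated weights), the following NumPy sketch is an assumption-laden toy, not the paper's implementation: the STDP rule, the perturbation-based importance estimator, the rate-based `forward` stand-in for a spiking readout, and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012):
    """Simplified pair-based STDP: potentiate synapses whose presynaptic
    neuron fired together with the postsynaptic one, depress otherwise.
    (A stand-in for SpykeTorch-style STDP, not the paper's exact rule.)"""
    pre = pre_spikes.astype(float)    # shape (n_in,)
    post = post_spikes.astype(float)  # shape (n_out,)
    return a_plus * np.outer(post, pre) - a_minus * np.outer(post, 1.0 - pre)

def estimate_importance(w, x, forward, noise_std=0.02, n_steps=100):
    """Gradient-free importance via random Langevin-style perturbations:
    weights whose perturbation correlates with large output changes
    accumulate high importance (illustrative assumption)."""
    base = forward(w, x)
    imp = np.zeros_like(w)
    for _ in range(n_steps):
        eta = rng.normal(0.0, noise_std, size=w.shape)
        delta = forward(w + eta, x) - base
        imp += np.abs(eta) * abs(delta)
    return imp / n_steps

def forward(w, x):
    # toy rectified-linear rate readout standing in for spike counts
    return float(np.maximum(w @ x, 0.0).sum())

n_in, n_out = 8, 4
w = rng.normal(0.0, 0.5, size=(n_out, n_in))
x = rng.random(n_in)

# importance computed on the old task ...
imp = estimate_importance(w, x, forward)

# ... then used to damp STDP updates of important weights on a new task
pre = rng.random(n_in) < 0.5
post = rng.random(n_out) < 0.5
dw = stdp_update(w, pre, post)
lam = 10.0  # consolidation strength (hypothetical)
w_new = w + dw / (1.0 + lam * imp)
```

The damping step mirrors the general regularization idea (as in EWC-style methods for analog networks), with the gradient-based importance replaced by the perturbation statistic.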