Xu Mingkun, Liu Faqiang, Hu Yifan, Li Hongyi, Wei Yuanyuan, Zhong Shuai, Pei Jing, Deng Lei
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5151-5165. doi: 10.1109/TNNLS.2024.3373599. Epub 2025 Feb 28.
Synaptic plasticity plays a critical role in the expressive power of brain neural networks. Among diverse plasticity rules, synaptic scaling is indispensable for homeostasis maintenance and synaptic strength regulation. In current modeling of brain-inspired spiking neural networks (SNNs), backpropagation through time is widely adopted because it can achieve high performance using a small number of time steps. Nevertheless, the synaptic scaling mechanism has not yet been well explored. In this work, we propose an experience-dependent adaptive synaptic scaling mechanism (AS-SNN) for spiking neural networks. The learning process has two stages: First, in the forward path, adaptive short-term potentiation or depression is triggered for each synapse according to the afferent stimulus intensity accumulated from presynaptic historical neural activity. Second, in the backward path, long-term consolidation is executed through gradient signals regulated by the corresponding scaling factor. This mechanism shapes the pattern selectivity of synapses and the information transfer they mediate. We theoretically prove that the proposed adaptive synaptic scaling function follows a contraction map and converges to an expected fixed point, consistent with state-of-the-art results on three tasks: perturbation resistance, continual learning, and graph learning. Specifically, for the perturbation resistance and continual learning tasks, our approach improves accuracy on the N-MNIST benchmark over the baseline by 44% and 25%, respectively. An expected firing-rate callback and sparse coding can be observed in graph learning. Extensive ablation studies and cost evaluations evidence the effectiveness and efficiency of our nonparametric adaptive scaling method, demonstrating the great potential of SNNs in continual and robust learning.
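The two-stage mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the exponential activity trace, the `scaling_factor` form, and all constants (`target`, `beta`, `decay`) are assumptions chosen only to show the shape of the idea, i.e. a per-synapse factor derived from accumulated presynaptic activity that scales the forward weight and, by the chain rule, modulates the backward gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaling_factor(trace, target=0.5, beta=0.5):
    """Hypothetical short-term scaling: potentiate synapses whose accumulated
    presynaptic activity is below a target level, depress those above it.
    With beta < 1 this update behaves like a contraction toward a fixed point."""
    return 1.0 + beta * (target - trace)

# Toy presynaptic spike trains: T time steps, n_pre input synapses.
T, n_pre = 20, 8
spikes = (rng.random((T, n_pre)) < 0.3).astype(float)

w = rng.normal(0.0, 0.1, size=n_pre)  # synaptic weights
trace = np.zeros(n_pre)               # running estimate of presynaptic activity
decay = 0.9

for t in range(T):
    # Forward path: accumulate historical activity, derive a per-synapse
    # scaling factor, and apply it to the effective weight.
    trace = decay * trace + (1.0 - decay) * spikes[t]
    s = scaling_factor(trace)
    psp = (s * w) @ spikes[t]         # scaled postsynaptic input

# Backward path: the gradient w.r.t. w passes through the scaled weight,
# so long-term consolidation is modulated by the same factor s.
grad_psp = 1.0                        # placeholder upstream gradient
grad_w = grad_psp * s * spikes[-1]    # dL/dw = dL/dpsp * s * x
```

Because the trace stays in [0, 1], the sketch's factor stays in a bounded band around 1, so under-stimulated synapses are gently potentiated and over-stimulated ones depressed, which is the homeostatic behavior the abstract attributes to synaptic scaling.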