

Adaptive Synaptic Scaling in Spiking Networks for Continual Learning and Enhanced Robustness.

Authors

Xu Mingkun, Liu Faqiang, Hu Yifan, Li Hongyi, Wei Yuanyuan, Zhong Shuai, Pei Jing, Deng Lei

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5151-5165. doi: 10.1109/TNNLS.2024.3373599. Epub 2025 Feb 28.

DOI: 10.1109/TNNLS.2024.3373599
PMID: 38536699
Abstract

Synaptic plasticity plays a critical role in the expression power of brain neural networks. Among diverse plasticity rules, synaptic scaling presents indispensable effects on homeostasis maintenance and synaptic strength regulation. In the current modeling of brain-inspired spiking neural networks (SNN), backpropagation through time is widely adopted because it can achieve high performance using a small number of time steps. Nevertheless, the synaptic scaling mechanism has not yet been well touched. In this work, we propose an experience-dependent adaptive synaptic scaling mechanism (AS-SNN) for spiking neural networks. The learning process has two stages: First, in the forward path, adaptive short-term potentiation or depression is triggered for each synapse according to afferent stimuli intensity accumulated by presynaptic historical neural activities. Second, in the backward path, long-term consolidation is executed through gradient signals regulated by the corresponding scaling factor. This mechanism shapes the pattern selectivity of synapses and the information transfer they mediate. We theoretically prove that the proposed adaptive synaptic scaling function follows a contraction map and finally converges to an expected fixed point, in accordance with state-of-the-art results in three tasks on perturbation resistance, continual learning, and graph learning. Specifically, for the perturbation resistance and continual learning tasks, our approach improves the accuracy on the N-MNIST benchmark over the baseline by 44% and 25%, respectively. An expected firing rate callback and sparse coding can be observed in graph learning. Extensive experiments on ablation study and cost evaluation evidence the effectiveness and efficiency of our nonparametric adaptive scaling method, which demonstrates the great potential of SNN in continual learning and robust learning.
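The abstract describes a two-stage mechanism: in the forward pass, each synapse accumulates presynaptic activity into a trace that drives short-term potentiation or depression via a bounded scaling factor; in the backward pass, gradients are modulated by that factor for long-term consolidation. The sketch below illustrates this flow only. Every name and constant (`ALPHA`, `ETA`, the sigmoid-shaped `forward_scaling`) is an illustrative assumption, not the authors' exact AS-SNN formulation:

```python
import numpy as np

ALPHA = 0.9   # decay of the presynaptic activity trace (assumed)
ETA = 0.05    # learning rate for long-term consolidation (assumed)

def forward_scaling(trace, spikes):
    """Stage 1 (forward path): accumulate presynaptic spike history
    into a trace, then map accumulated intensity to a per-synapse
    scaling factor bounded in (0, 2). A zero trace gives the neutral
    factor 1; strong drive is transiently potentiated (>1), weak
    drive depressed (<1)."""
    trace = ALPHA * trace + spikes
    scale = 2.0 / (1.0 + np.exp(-trace))
    return trace, scale

def backward_consolidation(weights, grad, scale):
    """Stage 2 (backward path): gradients are regulated by the scaling
    factor before the weight update, so synapses with stronger
    accumulated drive consolidate more."""
    return weights - ETA * scale * grad

# Toy usage: one layer of 4 synapses over 10 time steps.
rng = np.random.default_rng(0)
trace = np.zeros(4)
weights = rng.normal(size=4)
for _ in range(10):
    spikes = (rng.random(4) < 0.3).astype(float)  # Bernoulli spike train
    trace, scale = forward_scaling(trace, spikes)
    grad = rng.normal(size=4)                     # stand-in for BPTT gradients
    weights = backward_consolidation(weights, grad, scale)
```

Keeping the factor bounded and neutral at zero trace is consistent with the contraction-map property the paper proves (the scaling converges to a fixed point), though the actual function used there may differ.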


Similar Articles

1. Adaptive Synaptic Scaling in Spiking Networks for Continual Learning and Enhanced Robustness.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5151-5165. doi: 10.1109/TNNLS.2024.3373599. Epub 2025 Feb 28.
2. Similarity-based context aware continual learning for spiking neural networks.
   Neural Netw. 2025 Apr;184:107037. doi: 10.1016/j.neunet.2024.107037. Epub 2024 Dec 12.
3. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
   Neural Comput. 2019 Dec;31(12):2368-2389. doi: 10.1162/neco_a_01238. Epub 2019 Oct 15.
4. CDNA-SNN: A New Spiking Neural Network for Pattern Classification Using Neuronal Assemblies.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2274-2287. doi: 10.1109/TNNLS.2024.3353571. Epub 2025 Feb 6.
5. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
   Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
6. A review of learning in biologically plausible spiking neural networks.
   Neural Netw. 2020 Feb;122:253-272. doi: 10.1016/j.neunet.2019.09.036. Epub 2019 Oct 11.
7. Synaptic dynamics: linear model and adaptation algorithm.
   Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.
8. Spike-Timing-Dependent Plasticity With Activation-Dependent Scaling for Receptive Fields Development.
   IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5215-5228. doi: 10.1109/TNNLS.2021.3069683. Epub 2022 Oct 5.
9. Low Latency and Sparse Computing Spiking Neural Networks With Self-Driven Adaptive Threshold Plasticity.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17177-17188. doi: 10.1109/TNNLS.2023.3300514. Epub 2024 Dec 2.
10. Introduction to spiking neural networks: Information processing, learning and applications.
    Acta Neurobiol Exp (Wars). 2011;71(4):409-33. doi: 10.55782/ane-2011-1862.

Cited By

1. A Reinforced, Event-Driven, and Attention-Based Convolution Spiking Neural Network for Multivariate Time Series Prediction.
   Biomimetics (Basel). 2025 Apr 13;10(4):240. doi: 10.3390/biomimetics10040240.
2. Hybrid neural networks for continual learning inspired by corticohippocampal circuits.
   Nat Commun. 2025 Feb 2;16(1):1272. doi: 10.1038/s41467-025-56405-9.