


The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks.

Affiliations

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland

Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Institute for Science and Technology, 3400 Klosterneuburg, Austria

Publication Information

Neural Comput. 2021 Mar 26;33(4):899-925. doi: 10.1162/neco_a_01367.

DOI: 10.1162/neco_a_01367
PMID: 33513328
Abstract

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
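The core idea the abstract describes can be sketched in a few lines: the spike nonlinearity is a hard threshold whose true derivative is zero almost everywhere, so the backward pass substitutes a smooth "surrogate" derivative. The sketch below is illustrative only (the function names, the threshold `theta`, and the fast-sigmoid surrogate with scale `beta` are common conventions from this literature, not code from the paper); the paper's finding is that the surrogate's shape matters little while its scale can matter a lot.

```python
def spike(v, theta=1.0):
    """Forward pass: nondifferentiable Heaviside step at threshold theta."""
    return 1.0 if v >= theta else 0.0

def surrogate_grad(v, theta=1.0, beta=10.0):
    """Backward pass: a smooth stand-in for the step's derivative.

    One common choice is the derivative of a fast sigmoid,
    1 / (beta * |v - theta| + 1)**2, which peaks at the threshold
    and decays away from it; beta sets the surrogate's scale.
    """
    return 1.0 / (beta * abs(v - theta) + 1.0) ** 2

# Gradients are largest for near-threshold neurons, so their input
# weights receive the strongest updates during training.
print(spike(0.8), surrogate_grad(0.8))  # subthreshold: no spike, small gradient
print(spike(1.0), surrogate_grad(1.0))  # at threshold: spike, peak gradient
```

In a full training loop this surrogate would replace the step's derivative inside backpropagation through time; the forward dynamics keep the exact spikes.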


Similar Articles

1. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks.
   Neural Comput. 2021 Mar 26;33(4):899-925. doi: 10.1162/neco_a_01367.
2. Surrogate gradients for analog neuromorphic computing.
   Proc Natl Acad Sci U S A. 2022 Jan 25;119(4). doi: 10.1073/pnas.2109194119.
3. Directly training temporal Spiking Neural Network with sparse surrogate gradient.
   Neural Netw. 2024 Nov;179:106499. doi: 10.1016/j.neunet.2024.106499. Epub 2024 Jul 1.
4. Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform.
   Neural Netw. 2020 Jan;121:319-328. doi: 10.1016/j.neunet.2019.09.008. Epub 2019 Sep 24.
5. SNN: Time step reduction of spiking surrogate gradients for training energy efficient single-step spiking neural networks.
   Neural Netw. 2023 Feb;159:208-219. doi: 10.1016/j.neunet.2022.12.008. Epub 2022 Dec 19.
6. Heterogeneity in Neuronal Dynamics Is Learned by Gradient Descent for Temporal Processing Tasks.
   Neural Comput. 2023 Mar 18;35(4):555-592. doi: 10.1162/neco_a_01571.
7. Supervised learning in spiking neural networks: A review of algorithms and evaluations.
   Neural Netw. 2020 May;125:258-280. doi: 10.1016/j.neunet.2020.02.011. Epub 2020 Feb 25.
8. Supervised Learning Algorithm for Multilayer Spiking Neural Networks with Long-Term Memory Spike Response Model.
   Comput Intell Neurosci. 2021 Nov 24;2021:8592824. doi: 10.1155/2021/8592824. eCollection 2021.
9. Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities.
   Sensors (Basel). 2023 Mar 11;23(6):3037. doi: 10.3390/s23063037.
10. Self-adaptive STDP-based learning of a spiking neuron with nanocomposite memristive weights.
    Nanotechnology. 2020 Jan 17;31(4):045201. doi: 10.1088/1361-6528/ab4a6d. Epub 2019 Oct 2.

Cited By

1. Learning-efficient spiking neural networks with multi-compartment spatio-temporal backpropagation.
   iScience. 2025 Jun 3;28(7):112491. doi: 10.1016/j.isci.2025.112491. eCollection 2025 Jul 18.
2. Learning delays through gradients and structure: emergence of spatiotemporal patterns in spiking neural networks.
   Front Comput Neurosci. 2024 Dec 20;18:1460309. doi: 10.3389/fncom.2024.1460309. eCollection 2024.
3. Structural influences on synaptic plasticity: The role of presynaptic connectivity in the emergence of E/I co-tuning.
   PLoS Comput Biol. 2024 Oct 31;20(10):e1012510. doi: 10.1371/journal.pcbi.1012510. eCollection 2024 Oct.
4. Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing.
   Nat Commun. 2024 Sep 16;15(1):8122. doi: 10.1038/s41467-024-52259-9.
5. BayesianSpikeFusion: accelerating spiking neural network inference via Bayesian fusion of early prediction.
   Front Neurosci. 2024 Aug 5;18:1420119. doi: 10.3389/fnins.2024.1420119. eCollection 2024.
6. Coincidence detection and integration behavior in spiking neural networks.
   Cogn Neurodyn. 2024 Aug;18(4):1753-1765. doi: 10.1007/s11571-023-10038-0. Epub 2023 Dec 13.
7. LDD: High-Precision Training of Deep Spiking Neural Network Transformers Guided by an Artificial Neural Network.
   Biomimetics (Basel). 2024 Jul 6;9(7):413. doi: 10.3390/biomimetics9070413.
8. Co-learning synaptic delays, weights and adaptation in spiking neural networks.
   Front Neurosci. 2024 Apr 12;18:1360300. doi: 10.3389/fnins.2024.1360300. eCollection 2024.
9. Sparse-firing regularization methods for spiking neural networks with time-to-first-spike coding.
   Sci Rep. 2023 Dec 21;13(1):22897. doi: 10.1038/s41598-023-50201-5.
10. STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks.
    Front Neurosci. 2023 Nov 10;17:1261543. doi: 10.3389/fnins.2023.1261543. eCollection 2023.