

DelGrad: exact event-based gradients for training delays and weights on spiking neuromorphic hardware.

Authors

Göltz Julian, Weber Jimmy, Kriener Laura, Billaudelle Sebastian, Lake Peter, Schemmel Johannes, Payvand Melika, Petrovici Mihai A

Affiliations

Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany.

Department of Physiology, University of Bern, Bern, Switzerland.

Publication

Nat Commun. 2025 Sep 9;16(1):8245. doi: 10.1038/s41467-025-63120-y.

Abstract

Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Augmenting SNNs with trainable transmission delays, alongside synaptic weights, has recently been shown to increase their accuracy and parameter efficiency. However, existing training methods to optimize such networks rely on discrete time, approximate gradients, and full access to internal variables such as membrane potentials. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory and I/O-bandwidth demands. Here, we propose DelGrad, an analytical, event-based training method to compute exact loss gradients for both weights and delays. Grounded purely in spike timing, DelGrad eliminates the need to track any other variables to optimize SNNs. We showcase this key advantage by implementing DelGrad on the BrainScaleS-2 mixed-signal neuromorphic platform. For the first time, we experimentally demonstrate the parameter efficiency, accuracy benefits, and stabilizing effect of adding delays to SNNs on noisy hardware. DelGrad thus provides a new way for training SNNs with delays on neuromorphic substrates, with substantial improvements over previous results.
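To illustrate the flavor of exact event-based gradients through spike times, the sketch below uses a deliberately simplified toy model, not the neuron dynamics or equations of the DelGrad paper: a non-leaky integrate-and-fire neuron with linear post-synaptic potentials, where the output spike time has a closed form in the input spike times `t_in`, delays `d`, and weights `w`. In this toy model the gradients of the output spike time with respect to each delay and weight are exact and depend only on spike times, which is the key property the abstract highlights. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def spike_time(w, t_in, d, theta=1.0):
    """Output spike time of a toy non-leaky IF neuron with linear PSPs.

    Membrane: V(t) = sum_i w_i * (t - (t_in_i + d_i)) for arrived inputs.
    Assumes every delayed input arrives before the output spike and sum(w) > 0,
    so t_out = (theta + sum_i w_i * (t_in_i + d_i)) / sum_i w_i.
    """
    arrival = t_in + d
    t_out = (theta + np.sum(w * arrival)) / np.sum(w)
    assert np.all(arrival < t_out), "causal-set assumption violated"
    return t_out

def exact_grads(w, t_in, d, theta=1.0):
    """Exact gradients of t_out w.r.t. weights and delays (spike times only)."""
    t_out = spike_time(w, t_in, d, theta)
    W = np.sum(w)
    dt_dd = w / W                    # d t_out / d d_i
    dt_dw = (t_in + d - t_out) / W   # d t_out / d w_i
    return t_out, dt_dw, dt_dd

# Two delayed input spikes (hypothetical numbers).
w = np.array([0.8, 0.6])
t_in = np.array([0.1, 0.3])
d = np.array([0.2, 0.1])
t_out, dt_dw, dt_dd = exact_grads(w, t_in, d)

# Sanity-check the analytical delay gradients against central finite differences.
eps = 1e-6
for i in range(len(d)):
    dp, dm = d.copy(), d.copy()
    dp[i] += eps
    dm[i] -= eps
    fd = (spike_time(w, t_in, dp) - spike_time(w, t_in, dm)) / (2 * eps)
    assert abs(fd - dt_dd[i]) < 1e-6
```

Note that no membrane-potential trace is stored anywhere: the gradients are functions of spike times and parameters alone, which is what makes this style of training attractive for hardware with limited memory and I/O bandwidth.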


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8317/12420821/6f28f985a2eb/41467_2025_63120_Fig1_HTML.jpg
