Göltz Julian, Weber Jimmy, Kriener Laura, Billaudelle Sebastian, Lake Peter, Schemmel Johannes, Payvand Melika, Petrovici Mihai A
Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany.
Department of Physiology, University of Bern, Bern, Switzerland.
Nat Commun. 2025 Sep 9;16(1):8245. doi: 10.1038/s41467-025-63120-y.
Spiking neural networks (SNNs) inherently rely on the timing of signals to represent and process information. Augmenting SNNs with trainable transmission delays, alongside synaptic weights, has recently been shown to increase their accuracy and parameter efficiency. However, existing methods for training such networks rely on discrete time, approximate gradients, and full access to internal variables such as membrane potentials. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory and I/O-bandwidth demands. Here, we propose DelGrad, an analytical, event-based training method that computes exact loss gradients for both weights and delays. Grounded purely in spike timing, DelGrad eliminates the need to track any other variables when optimizing SNNs. We showcase this key advantage by implementing DelGrad on the BrainScaleS-2 mixed-signal neuromorphic platform. For the first time, we experimentally demonstrate the parameter efficiency, accuracy benefits, and stabilizing effect of adding delays to SNNs on noisy hardware. DelGrad thus provides a new way of training SNNs with delays on neuromorphic substrates, with substantial improvements over previous results.
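The abstract's core idea, exact loss gradients for both weights and delays computed purely from spike timing, can be illustrated on a toy model. The sketch below is not the paper's actual derivation: it uses a simple non-leaky integrate-and-fire neuron with exponential synaptic currents, for which the output spike time has a well-known closed form in the z = exp(t) domain, and extends it with a per-synapse transmission delay. All names, the threshold value, and the model choice are illustrative assumptions.

```python
import math

THETA = 1.0  # firing threshold (illustrative value)

def spike_time(ts, ws, ds):
    """Output spike time of a toy non-leaky integrate-and-fire neuron with
    exponential synaptic currents, where input spike i arrives at t_i + d_i.
    In the z = exp(t) domain the spike time has the closed form
    z_out = sum_i w_i * z_i / (sum_i w_i - THETA), with z_i = exp(t_i + d_i)."""
    zs = [math.exp(t + d) for t, d in zip(ts, ds)]
    denom = sum(ws) - THETA
    assert denom > 0, "total weight must exceed threshold for the neuron to fire"
    return math.log(sum(w * z for w, z in zip(ws, zs)) / denom)

def grads(ts, ws, ds):
    """Exact gradients of the output spike time w.r.t. every weight and delay,
    computed from spike times alone -- no membrane-potential traces needed."""
    zs = [math.exp(t + d) for t, d in zip(ts, ds)]
    denom = sum(ws) - THETA
    z_out = sum(w * z for w, z in zip(ws, zs)) / denom
    dt_dw = [(z - z_out) / (z_out * denom) for z in zs]        # d t_out / d w_i
    dt_dd = [w * z / (z_out * denom) for w, z in zip(ws, zs)]  # d t_out / d d_i
    return dt_dw, dt_dd
```

Note that the delay gradients fall out of the same event-based bookkeeping as the weight gradients; this spike-timing-only property is what allows such a scheme to avoid reading out internal state on neuromorphic hardware.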