

Co-learning synaptic delays, weights and adaptation in spiking neural networks.

Authors

Deckers Lucas, Van Damme Laurens, Van Leekwijck Werner, Tsang Ing Jyh, Latré Steven

Affiliation

IDLab, imec, University of Antwerp, Antwerp, Belgium.

Publication

Front Neurosci. 2024 Apr 12;18:1360300. doi: 10.3389/fnins.2024.1360300. eCollection 2024.

Abstract

Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) through their inherent temporal processing and spike-based computations, enabling power-efficient implementation in neuromorphic hardware. In this study, we demonstrate that data processing with spiking neurons can be enhanced by co-learning the synaptic weights with two other biologically inspired neuronal features: (1) a set of parameters describing neuronal adaptation processes and (2) synaptic propagation delays. The former allows a spiking neuron to learn how to specifically react to incoming spikes based on its past. The trained adaptation parameters result in neuronal heterogeneity, which leads to a greater variety of available spike patterns and is also found in the brain. The latter enables the network to learn to explicitly correlate spike trains that are temporally distant. Synaptic delays reflect the time an action potential requires to travel from one neuron to another. We show that each of the co-learned features separately leads to an improvement over the baseline SNN and that their combination leads to state-of-the-art SNN results on all speech recognition datasets investigated with a simple two-hidden-layer feed-forward network. Our SNN outperforms the benchmark ANN on the neuromorphic datasets (Spiking Heidelberg Digits and Spiking Speech Commands), even with fewer trainable parameters. On the 35-class Google Speech Commands dataset, our SNN also outperforms a GRU of similar size. Our study presents brain-inspired improvements to SNNs that enable them to outperform an equivalent ANN of similar size on tasks with rich temporal dynamics.
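To make the two co-learned features concrete, the following is a minimal, hypothetical NumPy sketch (not the authors' implementation) of an adaptive leaky integrate-and-fire layer. It assumes a simple adaptive-threshold model with a per-neuron adaptation strength `beta` and integer per-synapse delays `delays` that shift each input spike train in time; in the paper these quantities would be trained alongside the weights, whereas here they are simply simulated forward.

```python
import numpy as np

def run_adaptive_lif(spikes_in, w, delays, beta,
                     tau_mem=0.9, tau_adapt=0.95, theta0=1.0):
    """Simulate one adaptive LIF layer with per-synapse delays.

    spikes_in: (T, n_in) binary input spike trains
    w:         (n_in, n_out) synaptic weights
    delays:    (n_in, n_out) integer synaptic delays (in time steps)
    beta:      (n_out,) adaptation strengths (0 = plain LIF)
    Returns (T, n_out) binary output spike trains.
    """
    T, n_in = spikes_in.shape
    n_out = w.shape[1]
    v = np.zeros(n_out)            # membrane potential
    a = np.zeros(n_out)            # adaptation trace
    out = np.zeros((T, n_out))
    for t in range(T):
        # Delayed synaptic current: synapse (i, j) sees the spike that
        # input neuron i emitted `delays[i, j]` steps in the past.
        current = np.zeros(n_out)
        for i in range(n_in):
            for j in range(n_out):
                t_src = t - delays[i, j]
                if t_src >= 0 and spikes_in[t_src, i]:
                    current[j] += w[i, j]
        v = tau_mem * v + current
        theta = theta0 + beta * a      # adaptive firing threshold
        s = (v >= theta).astype(float) # spike where threshold is crossed
        v = v * (1.0 - s)              # reset membrane on spike
        a = tau_adapt * a + s          # each spike raises future thresholds
        out[t] = s
    return out
```

For example, a single synapse with weight 2.0 and delay 2 turns an input spike at step 0 into an output spike at step 2, which is the mechanism that lets delays align temporally distant spike trains. A nonzero `beta` would then make repeated firing progressively harder, giving the neuron history-dependent responses.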

