
Memristive leaky integrate-and-fire neuron and learnable straight-through estimator in spiking neural networks.

Authors

Chen Tao, She Chunyan, Wang Lidan, Duan Shukai

Affiliations

College of Artificial Intelligence, Southwest University, Chongqing, 400715 China.

National and Local Joint Engineering Research Center of Intelligent Transmission and Control Technology, Chongqing, 400715 China.

Publication

Cogn Neurodyn. 2024 Oct;18(5):3075-3091. doi: 10.1007/s11571-024-10133-w. Epub 2024 Jun 20.

Abstract

Compared to artificial neural networks (ANNs), spiking neural networks (SNNs) present a more biologically plausible model of neural system dynamics. They rely on sparse binary spikes to communicate information and operate in an asynchronous, event-driven manner. Despite the high heterogeneity of the neural system at the neuronal level, most current SNNs employ the widely used leaky integrate-and-fire (LIF) neuron model, which assumes uniform membrane-related parameters throughout the entire network. This approach hampers the expressiveness of spiking neurons and restricts the diversity of neural dynamics. In this paper, we propose replacing the resistor in the LIF model with a discrete memristor to obtain the heterogeneous memristive LIF (MLIF) model. The memristance of the discrete memristor is determined by the voltage and flux at its terminals, leading to dynamic changes in the membrane time parameter of the MLIF model. SNNs composed of MLIF neurons can not only learn synaptic weights but also adaptively change membrane time parameters according to the membrane potential of the neuron, enhancing the learning ability and expressiveness of SNNs. Furthermore, since a proper threshold for spiking neurons can improve the information capacity of SNNs, a learnable straight-through estimator (LSTE) is proposed. The LSTE, based on the straight-through estimator (STE) surrogate function, features a learnable threshold that facilitates the backward propagation of gradients through the spike-firing neurons. Extensive experiments on several popular static and neuromorphic benchmark datasets demonstrate the effectiveness of the proposed MLIF and LSTE; in particular, we achieve a top-1 accuracy of 84.40% on the DVS-CIFAR10 dataset.
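The abstract describes the MLIF update only in circuit terms. Below is a minimal PyTorch-style sketch of the idea, not the authors' exact equations: it assumes a bounded, sigmoid-shaped mapping from the accumulated membrane potential (standing in for the memristor flux) to the memristance, a hard reset after each spike, and a plain straight-through estimator for the spike nonlinearity; the class name MLIFNeuron and the parameters r_min, r_max, and capacitance are illustrative.

import torch
import torch.nn as nn


class MLIFNeuron(nn.Module):
    """Sketch of a memristive LIF neuron with a state-dependent time constant.

    A standard LIF layer decays the membrane potential with a fixed factor.
    Here the decay depends on a memristance-like state that evolves with the
    accumulated membrane potential (a stand-in for the flux through the
    memristor), so the effective time constant tau = R*C changes over time
    and differs across neurons.
    """

    def __init__(self, threshold=1.0, r_min=0.5, r_max=2.0, capacitance=1.0):
        super().__init__()
        self.threshold = threshold
        self.r_min, self.r_max = r_min, r_max
        self.capacitance = capacitance

    def forward(self, inputs):
        # inputs: (time_steps, batch, features) of input currents
        v = torch.zeros_like(inputs[0])     # membrane potential
        flux = torch.zeros_like(inputs[0])  # accumulated potential, flux proxy
        outputs = []
        for x in inputs:
            flux = flux + v
            # Bounded, flux-controlled memristance (illustrative mapping).
            r = self.r_min + (self.r_max - self.r_min) * torch.sigmoid(flux)
            decay = torch.exp(-1.0 / (r * self.capacitance))
            v = decay * v + x
            spike = (v >= self.threshold).float()
            # Plain straight-through estimator: hard spike in the forward
            # pass, identity gradient in the backward pass (the learnable
            # variant is sketched below).
            out = spike.detach() + v - v.detach()
            v = v * (1.0 - spike)           # hard reset after a spike
            outputs.append(out)
        return torch.stack(outputs)

Because the decay factor is recomputed per neuron and per time step from that neuron's own state, the effective membrane time constant varies across the network and during inference, which is the heterogeneity the abstract refers to.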

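The LSTE can likewise be sketched as a custom autograd function: the forward pass fires a hard spike against the current threshold, while the backward pass lets gradients flow straight through to the membrane potential, and also to the threshold, within a window around the threshold. The window width of 0.5 and the single layer-shared threshold below are illustrative assumptions, not values taken from the paper.

import torch


class LSTESpike(torch.autograd.Function):
    """Straight-through estimator with a learnable firing threshold (sketch)."""

    @staticmethod
    def forward(ctx, v, threshold, window=0.5):
        ctx.save_for_backward(v, threshold)
        ctx.window = window
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        v, threshold = ctx.saved_tensors
        # Pass gradients straight through only near the threshold.
        mask = (torch.abs(v - threshold) < ctx.window).float()
        grad_v = grad_output * mask
        # The spike depends on (v - threshold), so the threshold receives the
        # negated gradient, summed because a single scalar threshold is shared
        # by the whole layer (an illustrative choice).
        grad_threshold = -(grad_output * mask).sum().reshape(threshold.shape)
        return grad_v, grad_threshold, None


# Usage with a learnable, layer-shared threshold:
threshold = torch.nn.Parameter(torch.tensor(1.0))
v = torch.randn(8, 128, requires_grad=True)  # membrane potentials
spikes = LSTESpike.apply(v, threshold)
spikes.sum().backward()                      # gradients reach both v and threshold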
