Kim Youngeun, Li Yuhang, Moitra Abhishek, Yin Ruokai, Panda Priyadarshini
Department of Electrical Engineering, Yale University, New Haven, CT, United States.
Front Neurosci. 2023 Jul 31;17:1230002. doi: 10.3389/fnins.2023.1230002. eCollection 2023.
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store a membrane voltage that captures the temporal dynamics of spikes. Although the memory cost of LIF neurons increases significantly as the input dimension grows, techniques for reducing LIF neuron memory have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. Our EfficientLIF-Net achieves accuracy comparable to standard SNNs while delivering up to ~4.3× forward memory efficiency and ~21.9× backward memory efficiency for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which heavily rely on temporal information. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
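The sketch below illustrates the general idea described in the abstract: instead of each layer keeping its own membrane-voltage tensor, several layers reuse a single LIF module and thus a single membrane-voltage buffer. This is only an illustrative reading of the abstract, not the authors' released implementation; the class and parameter names (SharedLIF, leak, v_threshold) are hypothetical, and a PyTorch-style setting is assumed.

# Minimal sketch of sharing one LIF membrane-potential buffer across layers.
# Illustrative only; names and reset scheme are assumptions, not the paper's code.
import torch
import torch.nn as nn


class SharedLIF(nn.Module):
    """One LIF state reused by several layers, so a single membrane-voltage
    tensor is stored instead of one per layer."""

    def __init__(self, leak: float = 0.9, v_threshold: float = 1.0):
        super().__init__()
        self.leak = leak
        self.v_threshold = v_threshold
        self.v = None  # shared membrane potential, lazily allocated

    def reset(self):
        # call between input sequences to clear the temporal state
        self.v = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None or self.v.shape != x.shape:
            self.v = torch.zeros_like(x)
        # leaky integration of the incoming current
        self.v = self.leak * self.v + x
        # emit a binary spike where the threshold is crossed
        spike = (self.v >= self.v_threshold).float()
        # soft reset: subtract the threshold at spiking positions
        self.v = self.v - spike * self.v_threshold
        return spike


# Two convolution layers reuse the same LIF module, so the membrane state
# is held once rather than once per layer (the cross-layer sharing idea).
shared_lif = SharedLIF()
conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 16, 3, padding=1)

x = torch.rand(1, 3, 32, 32)  # one timestep of input
out = shared_lif(conv2(shared_lif(conv1(x))))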