

ALBSNN: ultra-low latency adaptive local binary spiking neural network with accuracy loss estimator.

Authors

Pei Yijian, Xu Changqing, Wu Zili, Liu Yi, Yang Yintang

Affiliations

Guangzhou Institute of Technology, Xidian University, Xi'an, China.

School of Microelectronics, Xidian University, Xi'an, China.

Publication

Front Neurosci. 2023 Sep 13;17:1225871. doi: 10.3389/fnins.2023.1225871. eCollection 2023.

Abstract

Spiking neural networks (SNNs) are brain-inspired models with greater spatio-temporal information processing capacity and computational energy efficiency. However, as SNNs grow deeper, the memory problem caused by their weights has gradually attracted attention. In this study, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with an accuracy loss estimator, which dynamically selects the network layers to be binarized, ensuring a balance between quantization degree and classification accuracy by evaluating the error introduced by binarized weights during network learning. At the same time, to accelerate training, a global average pooling (GAP) layer is introduced to replace the fully connected layers by combining convolution and pooling. Finally, to further reduce the error caused by binary weights, we propose binary weight optimization (BWO), which updates the overall weights by directly adjusting the binary weights; this further reduces the loss of a network that has reached its training bottleneck. Together, these methods balance the network's quantization and recognition ability, allowing it to maintain recognition capability equivalent to a full-precision network while reducing storage space by more than 20%. As a result, SNNs can obtain good recognition accuracy with a small number of time steps. In the extreme case of using only a single time step, we still achieve 93.39%, 92.12%, and 69.55% testing accuracy on three traditional static datasets: Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. We also evaluate our method on the neuromorphic N-MNIST, CIFAR10-DVS, and IBM DVS128 Gesture datasets and achieve advanced accuracy among SNNs with binary weights. Our network has greater advantages in terms of storage resources and training time.
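The abstract's pipeline (per-layer weight binarization, an estimator of the error a binarized layer would introduce, and GAP in place of fully connected layers) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the standard sign-plus-scaling binarization (as in BinaryConnect/XNOR-Net), uses a hypothetical relative-L2 distance as the accuracy loss estimator, and the `threshold` parameter and function names are invented for illustration.

```python
import numpy as np

def binarize(w):
    """Binarize a weight tensor with a per-layer scaling factor.

    Standard scheme: alpha * sign(w), with alpha the mean absolute
    weight. The paper's exact binarization may differ.
    """
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def quantization_error(w):
    """Hypothetical accuracy loss estimator for one layer.

    Relative L2 distance between the full-precision weights and
    their binarized version; a layer with large error is a
    candidate to keep at full precision.
    """
    return np.linalg.norm(w - binarize(w)) / np.linalg.norm(w)

def select_layers_to_binarize(layers, threshold=0.7):
    """Binarize only the layers whose estimated error is acceptable.

    `layers` maps layer names to weight arrays; layers whose
    estimated error exceeds `threshold` stay full precision.
    """
    out = {}
    for name, w in layers.items():
        out[name] = binarize(w) if quantization_error(w) <= threshold else w
    return out

def global_average_pool(x):
    """GAP over spatial dims of an (N, C, H, W) feature map,
    yielding (N, C) features that replace fully connected layers."""
    return x.mean(axis=(2, 3))
```

For Gaussian-distributed weights the relative binarization error is roughly 0.6, so a threshold near that value separates "cheap to binarize" layers from the rest; the real estimator in the paper is learned during training rather than computed from a fixed norm.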


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3105/10525310/b4e0cadfe37c/fnins-17-1225871-g0001.jpg
