The architecture design and training optimization of spiking neural network with low-latency and high-performance for classification and segmentation.

Author Information

Ye Wujian, Chen Shaozhen, Liu Haoxian, Liu Yijun, Chen Yuehai, Cui Youfeng, Lin Wenjie

Affiliation Information

School of Integrated Circuits, Guangdong University of Technology, Guangzhou, Guangdong, 510006, China.

School of Information Engineering, Guangdong University of Technology, Guangzhou, Guangdong, 510006, China.

Publication Information

Neural Netw. 2025 Jun 21;191:107790. doi: 10.1016/j.neunet.2025.107790.

Abstract

Spiking Neural Networks (SNNs) are the third generation of bio-mimetic neural networks, well suited to large-scale parallel computation thanks to their low power consumption and low latency. However, most training algorithms and network architectures of existing SNNs are designed on the basis of traditional Artificial Neural Networks (ANNs); they require a large number of time steps for inference and substantial membrane-potential storage, resulting in high latency and heavy memory consumption. In this paper, we propose a spiking neurons-shared ResNet (Spiking-NSNet) for image classification and a spiking semantic segmentation network (Spiking-SSegNet) for image segmentation, both based on our neurons-shared architecture and hybrid attenuation strategy. First, a novel Neurons-Shared Block (NS-Block) is designed to locally share the membrane-potential parameters of neurons, reducing the number of parameters and accelerating inference. Second, different attenuation factors are set for neurons in different NS-Blocks, so that different neurons exhibit different activities, in closer agreement with biological dynamic characteristics. Finally, a temporal-correlated (TC) loss algorithm is designed to optimize the direct training of SNNs for faster convergence and better performance. Based on the above improvements, Spiking-NSNet and Spiking-SSegNet are built on the ResNet and UNet architectures, respectively, and are trained by realizing pre-training and transfer learning of SNNs for the first time. Experiments show that the proposed Spiking-NSNet obtains high recognition accuracies of 94.65%, 77.4% and 79% with a low latency of four time steps on the static datasets CIFAR-10 and CIFAR-100 and the dynamic dataset DVS-CIFAR-10. The mIoUs of the designed Spiking-SSegNet reach 43.2% and 53.4% on the static dataset PASCAL-VOC2012 and the dynamic dataset DDD17. Thus, on recognition and segmentation tasks, the proposed methods effectively reduce the number of time steps and model parameters needed for training and inference while achieving performance comparable to that of traditional ANN models.
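The hybrid attenuation idea described above, a single decay (attenuation) factor shared by all neurons in a block rather than stored per neuron, can be sketched with a minimal discrete leaky integrate-and-fire (LIF) update. This is an illustrative reading of the abstract, not the paper's implementation; the function names, threshold, and decay values are assumptions.

```python
def lif_step(v, inp, decay, v_th=1.0):
    """One discrete LIF update: leaky integration, threshold, hard reset."""
    v = decay * v + inp                  # membrane potential decays, then integrates input
    spike = 1.0 if v >= v_th else 0.0    # fire when the threshold is crossed
    v = v * (1.0 - spike)                # hard reset to zero after a spike
    return v, spike

# Hypothetical per-block shared decay factors: every neuron in a block reuses
# its block's single factor, so no per-neuron decay parameter is stored.
block_decays = [0.9, 0.7, 0.5]

def run_neuron(inputs, decay):
    """Run one neuron over a sequence of time steps with a shared decay factor."""
    v, spikes = 0.0, []
    for x in inputs:
        v, s = lif_step(v, x, decay)
        spikes.append(s)
    return spikes

# Four time steps of constant input, using the first block's shared decay.
spikes = run_neuron([0.6, 0.6, 0.6, 0.6], block_decays[0])
```

With different `block_decays` entries, neurons in different blocks accumulate and fire at different rates, which is one way to read the abstract's claim that distinct attenuation factors give neurons distinct activity patterns.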
