
Spiking Deep Residual Networks.

Authors

Hu Yangfan, Tang Huajin, Pan Gang

Publication

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.

Abstract

Spiking neural networks (SNNs) have received significant attention for their biological plausibility. SNNs theoretically have at least the same computational power as traditional artificial neural networks (ANNs), and they hold the potential for energy-efficient machine intelligence while maintaining performance comparable to ANNs. However, training a very deep SNN remains a major challenge. In this brief, we propose an efficient approach to building deep SNNs. The residual network (ResNet) is considered a state-of-the-art and fundamental model among convolutional neural networks (CNNs). We employ the idea of converting a trained ResNet into a network of spiking neurons, named spiking ResNet (S-ResNet). We propose a residual conversion model that appropriately scales the continuous-valued activations of the ANN to match the firing rates of the SNN, and a compensation mechanism to reduce the error caused by discretization. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet 2012 with low latency. To our knowledge, this is the first work to build an asynchronous SNN deeper than 100 layers with performance comparable to its original ANN.
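To make the conversion idea concrete, below is a minimal sketch of generic rate-based ANN-to-SNN conversion, assuming a simple fully connected ReLU network. The percentile-based weight normalization and the soft reset ("reset by subtraction," which reduces discretization error) follow common conversion practice in the literature; they illustrate the general technique only and are not the paper's exact residual conversion model or compensation mechanism. All function and parameter names here are illustrative.

```python
# Sketch of rate-based ANN-to-SNN conversion with integrate-and-fire (IF)
# neurons. Assumptions: `weights` is a list of (out, in) matrices from a
# fully connected ReLU network; `activations` holds sampled ANN layer
# outputs from a calibration batch. Not the paper's exact method.
import numpy as np

def normalize_weights(weights, activations, percentile=99.9):
    """Rescale each layer so ANN activations map into [0, 1], letting an
    IF neuron's firing rate approximate the original ReLU output.
    Uses a robust (percentile) estimate of the maximum activation."""
    scaled = []
    prev_scale = 1.0
    for W, act in zip(weights, activations):
        scale = np.percentile(act, percentile)
        scaled.append(W * prev_scale / scale)
        prev_scale = scale
    return scaled

def simulate_snn(weights, x, timesteps=100, threshold=1.0):
    """Run IF neurons for `timesteps` steps with constant-current input
    coding; spike count / T approximates the scaled ANN activation."""
    membranes = [np.zeros(W.shape[0]) for W in weights]
    counts = np.zeros(weights[-1].shape[0])
    for _ in range(timesteps):
        inp = x
        for W, V in zip(weights, membranes):
            V += W @ inp                               # integrate input current
            spikes = (V >= threshold).astype(float)    # fire where over threshold
            V -= spikes * threshold                    # soft reset (subtraction)
            inp = spikes                               # spikes drive next layer
        counts += inp
    return counts / timesteps  # firing rates ≈ normalized ANN outputs
```

With longer simulation windows (larger `timesteps`), the firing rates converge toward the normalized ANN activations; the latency-accuracy trade-off this creates is exactly what conversion-error compensation schemes, such as the one the brief proposes, aim to improve.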

