Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN.

Authors

Hu Yangfan, Zheng Qian, Jiang Xudong, Pan Gang

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14546-14562. doi: 10.1109/TPAMI.2023.3275769. Epub 2023 Nov 3.

Abstract

Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace the weight multiplications of ANNs with additions, which are more energy-efficient and less computationally intensive. However, training deep SNNs remains a challenge due to the discrete spiking function. A popular approach to circumvent this challenge is ANN-to-SNN conversion, but because of quantization error and accumulating error, it often requires many time steps (high inference latency) to achieve high performance, which negates the advantages of SNNs. To this end, this paper proposes Fast-SNN, which achieves high performance with low latency. We demonstrate an equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the quantization error minimized, we show that the sequential error is the primary cause of the accumulating error, and we address it by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Code is available at: https://github.com/yangfan-hu/Fast-SNN.
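The abstract's central claim, an equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the helper names and the parameters theta (firing threshold, also the clipping level) and T (number of time steps, also the number of quantization levels) are hypothetical, and floor rounding with reset-by-subtraction is assumed. Driven by a constant input for T steps, an integrate-and-fire neuron's average output matches a T-level quantized ReLU exactly.

import numpy as np

# Illustrative sketch under assumed conventions; names and parameters are
# hypothetical, not taken from the Fast-SNN code base.

def quantized_relu(x, theta=1.0, T=4):
    """Spatially quantized ReLU: clip to [0, theta], floor-round to T levels."""
    return np.clip(np.floor(x * T / theta), 0, T) * theta / T

def if_spike_rate(x, theta=1.0, T=4):
    """Average output of an integrate-and-fire neuron driven by a constant
    input current x for T time steps, with threshold theta and
    reset-by-subtraction. At most one spike can fire per step."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x           # integrate the constant input current
        if v >= theta:
            v -= theta   # reset by subtracting the threshold
            spikes += 1
    return spikes * theta / T

# The two functions agree for any non-negative constant input, illustrating
# the mapping between temporal quantization (spike counts over T steps) and
# spatial quantization (a T-level discretized activation).
for x in np.linspace(0.0, 1.5, 7):
    assert np.isclose(quantized_relu(x, T=4), if_spike_rate(x, T=4))

Under this mapping, reducing the quantization error of the T-level ANN activation directly reduces the conversion error of the T-step SNN, which is why the error minimization can be pushed into quantized ANN training. The residual sequential error, arising when layer inputs vary across time steps rather than staying constant as assumed above, is what the signed IF neuron and layer-wise fine-tuning target.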
