School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China.
School of Computer Science and Engineering, Sichuan University of Science and Engineering, Yibin 643000, PR China.
Neural Netw. 2024 Jun;174:106244. doi: 10.1016/j.neunet.2024.106244. Epub 2024 Mar 15.
Spiking Neural Networks (SNNs) have become one of the most prominent next-generation computational models owing to their biological plausibility, low power consumption, and potential for neuromorphic hardware implementation. Among the various methods for obtaining usable SNNs, converting Artificial Neural Networks (ANNs) into SNNs is the most cost-effective approach. Early ANN-to-SNN conversion work struggled with the susceptibility of converted SNNs to conversion errors. Some recent efforts have attempted to mitigate these errors by altering the original ANNs; although such methods improve SNN accuracy, they lack generality and cannot be applied directly to convert most existing ANNs. In this paper, we present DNISNM, a framework for converting ANNs to SNNs that addresses conversion errors arising from the differences in discreteness and asynchrony of network transmission between ANNs and SNNs. DNISNM consists of two mechanisms, Data-based Neuronal Initialization (DNI) and Signed Neuron with Memory (SNM), designed to address the errors stemming from the discreteness and asynchrony disparities, respectively. The framework requires no additional modification of the original ANN and yields SNNs with improved accuracy while ensuring generality, high precision, and low inference latency. We verify it experimentally on challenging object recognition datasets, including CIFAR10, CIFAR100, and ImageNet-1k. Experimental results show that the SNNs converted by our framework achieve very high accuracy even at extremely low latency.
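The abstract does not specify the exact update rules of DNI and SNM, so the following is only a minimal, assumption-based Python sketch of the two ideas as commonly formulated in ANN-to-SNN conversion work: an integrate-and-fire neuron whose membrane potential is initialized from a data-derived value rather than zero (here, half the firing threshold as a stand-in for DNI), and a signed neuron whose negative spikes are gated by a memory of previously emitted positive spikes (a stand-in for SNM). The class name SignedIFNeuron, the half-threshold initialization, and the demo values are illustrative, not the paper's implementation.

    import numpy as np

    class SignedIFNeuron:
        """Integrate-and-fire neuron with signed spikes and a spike-count memory.

        Illustrative only: the membrane potential starts at a data-derived value
        (here 0.5 * threshold) instead of zero, and a negative spike is emitted
        only if positive spikes were emitted earlier, so the cumulative output
        never drops below zero.
        """

        def __init__(self, threshold=1.0, v_init=None):
            self.threshold = threshold
            self.v = 0.5 * threshold if v_init is None else v_init  # assumed DNI-style init
            self.memory = 0  # net count of positive spikes emitted so far

        def step(self, current):
            self.v += current
            if self.v >= self.threshold:
                self.v -= self.threshold      # soft reset by subtraction
                self.memory += 1
                return 1.0                    # positive spike
            if self.v <= -self.threshold and self.memory > 0:
                self.v += self.threshold
                self.memory -= 1
                return -1.0                   # negative spike cancels an earlier one
            return 0.0

    # Tiny demo: the spike-rate estimate approximates a (clipped) ANN activation.
    rng = np.random.default_rng(0)
    neuron = SignedIFNeuron(threshold=1.0)
    currents = rng.normal(0.3, 0.5, size=32)
    spikes = [neuron.step(float(c)) for c in currents]
    print("estimated activation:", sum(spikes) * neuron.threshold / len(spikes))

In this sketch the data-based initialization reduces the quantization (discreteness) error of the rate code over a short time window, while the memory-gated negative spikes keep the running output from undershooting when inputs arrive out of order (the asynchrony error); the paper's actual DNI and SNM mechanisms may differ in detail.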