Wang Zhehui, Gu Xiaozhe, Goh Rick Siow Mong, Zhou Joey Tianyi, Luo Tao
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3689-3701. doi: 10.1109/TNNLS.2022.3195918. Epub 2024 Feb 29.
Spiking neural networks (SNNs) have advantages in latency and energy efficiency over traditional artificial neural networks (ANNs) due to their event-driven computation mechanism and the replacement of energy-consuming weight multiplication with addition. However, achieving high accuracy usually requires long spike trains, typically more than 1000 time steps. This offsets the computational efficiency brought by SNNs, because a longer spike train means more operations and higher latency. In this article, we propose a radix-encoded SNN with ultrashort spike trains. Specifically, it is able to use fewer than six time steps to achieve even higher accuracy than its traditional counterpart. We also develop a method to fit our radix encoding technique into the ANN-to-SNN conversion approach, so that radix-encoded SNNs can be trained more efficiently on mature platforms and hardware. Experiments show that our radix encoding achieves a 25× improvement in latency and a 1.7% improvement in accuracy compared to the state-of-the-art method using the VGG-16 network on the CIFAR-10 dataset.
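The abstract does not spell out the encoding scheme, but the core idea of radix encoding can be illustrated with a minimal sketch: instead of rate coding, where representing an activation value of n requires on the order of n time steps, each time step t carries a digit weighted by radix^t, so a value up to radix^T - 1 fits in T time steps. The function names and the choice of radix 2 below are illustrative assumptions, not the paper's exact formulation.

```python
def radix_encode(value, radix=2, num_steps=6):
    """Illustrative sketch: encode a non-negative integer activation
    as a short spike train of `num_steps` digits in base `radix`.
    Digit at time step t is weighted by radix**t when decoded."""
    digits = []
    for _ in range(num_steps):
        digits.append(value % radix)  # spike count at this time step
        value //= radix
    return digits


def radix_decode(digits, radix=2):
    """Recover the original value from the digit-per-step spike train."""
    return sum(d * radix**t for t, d in enumerate(digits))


# With radix 2 and 6 time steps, any activation in [0, 63] is exactly
# representable, whereas rate coding of the value 42 would need at
# least 42 time steps.
train = radix_encode(42)
assert radix_decode(train) == 42
```

This positional weighting is why six time steps can suffice where rate-coded SNNs need spike trains orders of magnitude longer.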