
Converting High-Performance and Low-Latency SNNs Through Explicit Modeling of Residual Error in ANNs.

Author Information

Huang Zhipeng, Ding Jianhao, Pan Zhiyu, Li Haoran, Fang Ying, Yu Zhaofei, Liu Jian K

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Sep;36(9):16788-16802. doi: 10.1109/TNNLS.2025.3567567.

Abstract

Spiking neural networks (SNNs) have garnered interest due to their energy efficiency and superior effectiveness on neuromorphic chips compared with traditional artificial neural networks (ANNs). One of the mainstream approaches to implementing deep SNNs is ANN-SNN conversion, which integrates the efficient training strategy of ANNs with the energy-saving potential and fast inference capability of SNNs. However, under extremely low-latency conditions, existing conversion theory suggests that SNN neurons firing more or fewer spikes within each layer than the corresponding ANN activations prescribe, i.e., residual error, leads to a performance gap between the converted SNNs and the original ANNs. This severely limits the practical application of SNNs on delay-sensitive edge devices. Existing conversion methods addressing this problem usually modify the state of the converted spiking neurons; however, they do not consider adaptability to and compatibility with neuromorphic chips. We propose a new approach based on explicitly modeling residual error as additive noise. The noise is incorporated into the activation function of the source ANN, effectively reducing the impact of residual error on SNN performance. Our experiments on the CIFAR10/100 and Tiny-ImageNet datasets verify that our approach outperforms prevailing ANN-SNN conversion methods and directly trained SNNs in both accuracy and the number of required time steps. Overall, our method offers a new route to improving SNN performance under ultralow-latency conditions and is expected to promote the further development of practical neuromorphic hardware applications. The code for our NQ framework is available at https://github.com/hzp2022/ANN2SNN_NQ.
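To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how residual error can be modeled as additive noise inside a quantized activation of the source ANN. The module name `NoisyQuantReLU` and the hyperparameters `levels` and `noise_scale` are illustrative assumptions; the actual NQ implementation is in the repository linked above.

```python
# A minimal sketch of the idea: during ANN training, a clipped, quantized
# activation is perturbed with additive noise so the network learns to
# tolerate the residual error that appears after conversion to an SNN.
# All names and hyperparameters here are hypothetical, not the NQ API.
import torch
import torch.nn as nn


class NoisyQuantReLU(nn.Module):
    """Clipped, quantized activation with additive training-time noise.

    `levels` plays the role of the SNN time-step budget T; the additive
    uniform noise stands in for the +/- spike-count residual error.
    """

    def __init__(self, levels: int = 4, noise_scale: float = 0.5):
        super().__init__()
        self.levels = levels
        self.noise_scale = noise_scale
        # Trainable firing threshold, analogous to the conversion scale.
        self.threshold = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clip to [0, 1] in threshold units, then quantize to `levels`
        # discrete values with a straight-through gradient estimator.
        x = torch.clamp(x / self.threshold, 0.0, 1.0)
        q = torch.round(x * self.levels) / self.levels
        x = x + (q - x).detach()  # straight-through: forward q, backward x
        if self.training:
            # Explicitly model residual error as additive noise on the
            # quantized activation (at most one quantization bin wide).
            noise = (torch.rand_like(x) - 0.5) * (self.noise_scale / self.levels)
            x = x + noise
        return x * self.threshold
```

The intuition behind this reading: because `levels` mirrors the SNN time-step budget, an ANN trained through such an activation has already seen perturbations of the same shape as the residual error it will face after its ReLUs are replaced by integrate-and-fire neurons at conversion time.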
