

Low-Latency Spiking Neural Networks Using Pre-Charged Membrane Potential and Delayed Evaluation.

Authors

Hwang Sungmin, Chang Jeesoo, Oh Min-Hye, Min Kyung Kyu, Jang Taejin, Park Kyungchul, Yu Junsu, Lee Jong-Ho, Park Byung-Gook

Affiliation

Inter-university Semiconductor Research Center (ISRC) and Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea.

Publication

Front Neurosci. 2021 Feb 18;15:629000. doi: 10.3389/fnins.2021.629000. eCollection 2021.

Abstract

Spiking neural networks (SNNs) have attracted many researchers' interest due to their biological plausibility and event-driven characteristics. In particular, many recent studies have reported high-performance SNNs comparable to conventional analog-valued neural networks (ANNs), obtained by converting weights trained in ANNs into SNNs. However, unlike ANNs, SNNs have an inherent latency required to reach their best performance because of differences in neuron operation. In SNNs, temporal integration exists in addition to spatial integration, and information is encoded by spike trains rather than by values as in ANNs. Therefore, it takes time for an SNN to reach a steady state of performance. The latency is worse in deep networks and must be reduced for practical applications. In this work, we propose a pre-charged membrane potential for latency reduction in SNNs. A variety of neural network applications (e.g., classification and autoencoders using the MNIST and CIFAR-10 datasets) are trained and converted to SNNs to demonstrate the effect of the proposed approach. The latency of the SNNs is successfully reduced without accuracy loss. In addition, we propose a delayed evaluation method, by which the errors during the initial transient are discarded. The error spikes occurring in the initial transient are removed by delayed evaluation, resulting in further latency reduction. Delayed evaluation can be used in combination with the pre-charged membrane potential for further latency reduction. Finally, we also show the advantages of the proposed methods in reducing the number of spikes required to reach a steady state of performance in SNNs, enabling energy-efficient computing.
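The two ideas in the abstract can be illustrated with a toy integrate-and-fire neuron. The sketch below is not the authors' implementation; all parameter values (threshold, input current, pre-charge level, delay window) are illustrative assumptions. It shows how initializing the membrane potential above zero shortens the time to the first output spike, and how discarding spikes from an initial transient window (delayed evaluation) estimates the steady-state firing rate without the start-up bias.

```python
import numpy as np

def simulate_if_neuron(input_current, n_steps, v_init=0.0, v_th=1.0):
    """Integrate-and-fire neuron with reset-by-subtraction.

    Returns the output spike train as a 0/1 array, one entry per time step.
    """
    v = v_init
    spikes = np.zeros(n_steps, dtype=int)
    for t in range(n_steps):
        v += input_current          # temporal integration of the input
        if v >= v_th:               # threshold crossing -> output spike
            spikes[t] = 1
            v -= v_th               # reset by subtraction keeps residual charge
    return spikes

n_steps = 100
rate_in = 0.3                       # constant input current per step (assumed)

# Baseline: membrane potential starts at 0, so the first spike comes late.
baseline = simulate_if_neuron(rate_in, n_steps, v_init=0.0)

# Pre-charged membrane potential: starting at half the threshold, the neuron
# crosses threshold sooner, reducing the output latency.
precharged = simulate_if_neuron(rate_in, n_steps, v_init=0.5)

# Delayed evaluation: ignore spikes in an initial transient window when
# estimating the output rate, so early errors are discarded.
delay = 20
rate_delayed = baseline[delay:].mean()

print("first spike (baseline):   t =", int(np.argmax(baseline)))
print("first spike (pre-charged):t =", int(np.argmax(precharged)))
print("rate after delay window:  ", rate_delayed)
```

With these assumed values, the pre-charged neuron fires its first spike earlier than the baseline, and the rate estimated after the delay window matches the input rate; in the paper both techniques are applied network-wide to ANN-converted SNNs rather than to a single neuron.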


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/308f/7935527/a6e333aca66e/fnins-15-629000-g001.jpg
