Zhou Chenlin, Zhang Han, Yu Liutao, Ye Yumin, Zhou Zhaokun, Huang Liwei, Ma Zhengyu, Fan Xiaopeng, Zhou Huihui, Tian Yonghong
Peng Cheng Laboratory, Shenzhen, China.
Faculty of Computing, Harbin Institute of Technology, Harbin, China.
Front Neurosci. 2024 Jul 31;18:1383844. doi: 10.3389/fnins.2024.1383844. eCollection 2024.
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs) by virtue of their high biological plausibility, rich spatial-temporal dynamics, and event-driven computation. Direct training algorithms based on the surrogate gradient method provide sufficient flexibility to design novel SNN architectures and to explore the spatial-temporal dynamics of SNNs. According to previous studies, model performance is highly dependent on model size. Recently, directly trained deep SNNs have achieved great progress on both neuromorphic datasets and large-scale static datasets. Notably, transformer-based SNNs achieve performance comparable to their ANN counterparts. In this paper, we provide a new perspective from which to summarize, in a systematic and comprehensive way, the theories and methods for training high-performance deep SNNs, covering theoretical fundamentals, spiking neuron models, advanced SNN models and residual architectures, software frameworks and neuromorphic hardware, applications, and future trends.
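To make the surrogate gradient method mentioned above concrete, here is a minimal sketch (not taken from the paper) of a leaky integrate-and-fire (LIF) neuron trained with a surrogate gradient in PyTorch. The Heaviside spike function has zero gradient almost everywhere, so the backward pass substitutes a smooth surrogate; the particular surrogate (derivative of a fast sigmoid), the time constant `tau`, and the threshold value are illustrative assumptions, not choices made by the authors.

```python
# Illustrative sketch of surrogate-gradient training for a LIF neuron.
# Assumptions (not from the paper): fast-sigmoid surrogate, hard reset,
# threshold 1.0, membrane time constant tau = 2.0.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Forward pass: hard threshold (Heaviside step at 0).
        return (membrane_potential >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Backward pass: smooth surrogate gradient, 1 / (1 + |u|)^2,
        # i.e., the derivative of a fast sigmoid.
        surrogate = 1.0 / (1.0 + membrane_potential.abs()) ** 2
        return grad_output * surrogate

def lif_step(v, x, tau=2.0, v_threshold=1.0):
    """One discrete-time LIF update: leaky integration, spike, hard reset."""
    v = v + (x - v) / tau                          # leaky integration of input x
    spike = SurrogateSpike.apply(v - v_threshold)  # differentiable spike
    v = v * (1.0 - spike)                          # reset membrane where it spiked
    return spike, v
```

Because `SurrogateSpike` is differentiable in this relaxed sense, the LIF step can be unrolled over time and the whole network trained end-to-end with standard backpropagation through time, which is what enables the direct training of the deep and transformer-based SNN architectures the abstract discusses.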