
IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation

Authors

Fan Xiongfei, Zhang Hong, Zhang Yu

Affiliations

State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China.

Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou 310027, China.

Publication

Biomimetics (Basel). 2023 Aug 18;8(4):375. doi: 10.3390/biomimetics8040375.

Abstract

Spiking neural networks (SNNs) are widely recognized for their biomimetic and efficient computing features: they use spikes to encode and transmit information. Despite these advantages, SNNs suffer from low accuracy when trained directly and from large inference latency when converted from artificial neural networks (ANNs). To address these limitations, we propose a novel training pipeline, IDSNN, based on parameter initialization and knowledge distillation, which uses an ANN as both a parameter source and a teacher. IDSNN maximizes the knowledge extracted from the ANN and achieves competitive top-1 accuracy on CIFAR10 (94.22%) and CIFAR100 (75.41%) with low latency. More importantly, it converges 14× faster than directly trained SNNs under limited training resources, which demonstrates its practical value in applications.
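The abstract gives no implementation details of the distillation loss. As a rough illustration only, a generic Hinton-style logit-distillation objective (not necessarily IDSNN's exact formulation) combines cross-entropy on the hard labels with a KL term pulling the student's softened outputs toward the teacher's; for an SNN student, the logits would plausibly be firing rates averaged over timesteps. All function and parameter names below are hypothetical:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge-distillation loss (Hinton-style sketch, not IDSNN's
    published loss). For an SNN student, `student_logits` would be the output
    averaged over simulation timesteps (an assumption, not from the paper)."""
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean() * T * T
    # Standard cross-entropy against the hard labels.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * ce + (1.0 - alpha) * kd
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label cross-entropy remains; `alpha` trades off the two terms.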


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3625/10452895/feed48d059f3/biomimetics-08-00375-g001.jpg
