

Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design.

Authors

Sunghyun Choi, Dongwoo Lew, Jongsun Park

Publication

IEEE Trans Biomed Circuits Syst. 2022 Jun;16(3):442-455. doi: 10.1109/TBCAS.2022.3181808. Epub 2022 Jul 12.

Abstract

In this paper, we present a novel early termination based training acceleration technique for temporal coding based spiking neural network (SNN) processor design. The proposed early termination scheme efficiently identifies non-contributing training images during the feedforward stage of training and skips the rest of the training process for those images to save training energy and time. A metric to evaluate each input image's contribution to training has been developed; it is compared with a pre-determined threshold to decide whether to skip the rest of the training process. For threshold selection, an adaptive threshold calculation method is presented to increase the computation skip ratio without sacrificing accuracy. A timestep splitting approach is also employed to allow more frequent early termination within split timesteps, leading to further computation savings. The proposed early termination and timestep splitting techniques achieve a 51.21/42.31/93.53/30.36% reduction in synaptic operations and an 86.06/64.63/90.82/49.14% reduction in feedforward timesteps for the training process on the MNIST/Fashion-MNIST/ETH-80/EMNIST-Letters datasets, respectively. The hardware implementation of the proposed SNN processor in a 28 nm CMOS process shows that it achieves training energy savings of 61.76/31.88% and computation cycle reductions of 69.10/36.26% on the MNIST/Fashion-MNIST datasets, respectively.
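The control flow the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the contribution metric (here, the margin between the two largest accumulated output potentials), the fixed threshold, and the per-split spike accumulation are all assumptions chosen to show the early-termination idea over split timesteps.

```python
import numpy as np

def contribution_metric(potentials):
    # Hypothetical metric: margin between the two largest output potentials.
    # A large margin suggests the image is already well classified and is
    # unlikely to contribute to training; the paper's actual metric may differ.
    top2 = np.sort(potentials)[-2:]
    return top2[1] - top2[0]

def feedforward_with_early_termination(spike_counts_per_split, threshold):
    """Run the feedforward pass split-by-split over the timestep splits.

    Returns (potentials, splits_used, terminated_early). If the metric
    exceeds the threshold after any split, the remaining splits and the
    entire weight-update phase for this image are skipped.
    """
    potentials = np.zeros_like(spike_counts_per_split[0], dtype=float)
    used = 0
    for split in spike_counts_per_split:
        potentials += split  # accumulate output-neuron activity for this split
        used += 1
        if contribution_metric(potentials) > threshold:
            # Non-contributing image: terminate early, saving energy/cycles.
            return potentials, used, True
    return potentials, used, False
```

With timestep splitting, the metric is checked after every split rather than once at the end of the full feedforward window, which is what enables the large reported reductions in feedforward timesteps. An adaptive threshold (not shown) would adjust the skip criterion during training to raise the skip ratio without hurting accuracy.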

