Highly efficient neuromorphic learning system of spiking neural network with multi-compartment leaky integrate-and-fire neurons.

Authors

Gao Tian, Deng Bin, Wang Jiang, Yi Guosheng

Affiliations

School of Electrical and Information Engineering, Tianjin University, Tianjin, China.

Publication

Front Neurosci. 2022 Sep 28;16:929644. doi: 10.3389/fnins.2022.929644. eCollection 2022.

Abstract

A spiking neural network (SNN) is considered a high-performance learning system that maps well onto digital circuits and offers higher efficiency owing to the architecture and computation of spiking neurons. When an SNN is implemented on a field-programmable gate array (FPGA), however, gradient back-propagation through the layers consumes a surprisingly large amount of resources. In this paper, we aim to realize an efficient SNN architecture on the FPGA that reduces resource and power consumption. A multi-compartment leaky integrate-and-fire (MLIF) model is used to convert spike trains into a plateau potential in the dendrites. During the training period, we accumulate the potential in the apical dendrite; the average of this accumulated result is the dendritic plateau potential, which is used to guide the updates of the synaptic weights. Based on this architecture, the SNN is implemented efficiently on the FPGA. In the implementation of the neuromorphic learning system, a shift multiplier (shift MUL) module and a piecewise linear (PWL) algorithm replace multipliers and complex nonlinear functions to suit digital circuits. The neuromorphic learning system is built entirely from on-chip FPGA resources, with no data flow between on-chip and off-chip memories. Our neuromorphic learning system achieves higher resource utilization and power efficiency than previous on-chip learning systems.
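The plateau-guided learning scheme described in the abstract can be sketched in Python. This is a minimal single-neuron illustration under assumed parameters: the time constant, threshold, Bernoulli input model, and the specific weight-update rule are placeholders, not the paper's implementation. The sketch accumulates the apical-dendrite potential over a training window, averages it to obtain the plateau potential, and uses the gap between that plateau and a target value to update the weights.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_rest=0.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire compartment;
    the potential resets to rest when it crosses the threshold."""
    v = v + dt / tau * (v_rest - v + i_in)
    spike = v >= v_th
    if spike:
        v = v_rest
    return v, spike

def train_step(w, x_rates, target, T=100, lr=0.1, seed=0):
    """Accumulate the apical-dendrite potential over a window of T steps,
    average it to obtain the plateau potential, and nudge the weights so
    the plateau moves toward `target` (an illustrative rule only)."""
    rng = np.random.default_rng(seed)
    v_soma = 0.0
    v_apical_sum = 0.0
    for _ in range(T):
        # Bernoulli spikes standing in for Poisson input spike trains
        spikes = (rng.random(len(x_rates)) < x_rates).astype(float)
        i_dend = float(w @ spikes)      # synaptic drive into the dendrite
        v_soma, _ = lif_step(v_soma, i_dend)
        v_apical_sum += i_dend          # accumulate the apical potential
    plateau = v_apical_sum / T          # average = dendritic plateau potential
    w = w + lr * (target - plateau) * x_rates  # plateau-guided update
    return w, plateau
```

Because the update is driven by a single averaged quantity rather than by gradients propagated back through layers, the corresponding hardware needs no layer-wise back-propagation datapath, which is the resource saving the abstract emphasizes.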

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b4bb/9554099/9e24fc78418e/fnins-16-929644-g001.jpg
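The two hardware-friendly substitutions named in the abstract, shift multiplication and piecewise-linear approximation, can be illustrated with a short sketch. The function names and the PWL segment boundaries below are assumptions for illustration, not the paper's exact design: the weight is rounded to the nearest signed power of two so the product reduces to a bit-shift on a fixed-point integer, and the sigmoid is replaced by a clamped linear segment.

```python
import math

def shift_mul(x_q, w):
    """Approximate x_q * w by rounding w to the nearest signed power of
    two, so the multiplication becomes a single bit-shift in hardware.
    x_q is a fixed-point value represented as an integer."""
    if w == 0:
        return 0
    k = round(math.log2(abs(w)))            # nearest power-of-two exponent
    p = x_q << k if k >= 0 else x_q >> -k   # shift instead of multiply
    return p if w > 0 else -p

def pwl_sigmoid(x):
    """Clamped single-segment PWL sigmoid (an assumed segment choice,
    shown only to illustrate the PWL idea)."""
    if x <= -4.0:
        return 0.0
    if x >= 4.0:
        return 1.0
    return 0.5 + 0.125 * x
```

In hardware terms, the shift replaces a DSP multiplier with wiring and a small amount of logic, and the PWL segment replaces a lookup table or an iterative evaluation of the nonlinear function, which is how such designs typically cut resource and power consumption.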
