Jia Shuncheng, Zhang Tielin, Cheng Xiang, Liu Hongxing, Xu Bo
Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China.
School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS), Beijing, China.
Front Neurosci. 2021 Mar 12;15:654786. doi: 10.3389/fnins.2021.654786. eCollection 2021.
Various dynamics and plasticity principles found in natural neural networks have been successfully applied to spiking neural networks (SNNs), which offer biologically plausible, efficient, and robust computation compared with their deep neural network (DNN) counterparts. Here, we propose a Neuronal-plasticity and Reward-propagation improved Recurrent SNN (NRR-SNN). A history-dependent adaptive threshold with two channels is introduced as a key form of neuronal plasticity that enriches the neuronal dynamics, and global labels, rather than errors, are used as a reward signal for parallel gradient propagation. In addition, a recurrent loop with appropriate sparseness is designed for robust computation. Higher accuracy and stronger robustness are achieved on two sequential datasets (the TIDigits and TIMIT datasets), demonstrating, to some extent, the power of the proposed NRR-SNN with its biologically plausible improvements.
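As an illustration of the history-dependent adaptive threshold mentioned above, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron whose firing threshold rises after each spike and decays back toward a baseline. The abstract does not give the paper's exact two-channel formulation, so the single exponential adaptation variable, the parameter names (`tau_m`, `tau_th`, `beta`), and all constants here are illustrative assumptions.

```python
import numpy as np

def lif_adaptive_threshold(inputs, tau_m=20.0, tau_th=100.0,
                           v_th0=1.0, beta=0.5, dt=1.0):
    """LIF neuron with a history-dependent adaptive threshold (sketch).

    The effective threshold rises by `beta` after every spike and decays
    back toward the baseline `v_th0` with time constant `tau_th`, so the
    threshold at any moment depends on the neuron's spiking history.
    All parameters are illustrative, not the paper's actual values.
    """
    v, a = 0.0, 0.0              # membrane potential, threshold adaptation
    spikes = []
    for x in inputs:
        v += dt / tau_m * (-v + x)   # leaky integration of the input
        a -= dt / tau_th * a         # adaptation decays back to zero
        theta = v_th0 + a            # effective (history-dependent) threshold
        s = float(v >= theta)
        if s:
            v = 0.0                  # reset membrane potential after a spike
            a += beta                # raise threshold: recent spikes suppress firing
        spikes.append(s)
    return np.array(spikes)

# Under a constant drive, the neuron fires progressively less often
# as its threshold adapts upward after each spike.
out = lif_adaptive_threshold(np.full(200, 1.5))
```

This kind of threshold adaptation acts as a homeostatic mechanism: a neuron that has fired recently becomes temporarily harder to excite, which enriches the temporal dynamics of the network on sequential inputs such as speech.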