Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.
Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China.
Neural Netw. 2022 Oct;154:68-77. doi: 10.1016/j.neunet.2022.06.036. Epub 2022 Jul 11.
Spiking neural networks (SNNs) transmit information through discrete spikes, which makes them well suited to processing spatiotemporal information. Owing to the non-differentiable nature of spiking, it remains difficult to design SNNs that deliver good performance. SNNs trained with backpropagation via gradient approximation have recently exhibited impressive performance; however, their performance on complex tasks remains significantly inferior to that of deep neural networks. Inspired by autapses in the brain, which connect spiking neurons to themselves through a self-feedback connection, we apply an adaptive, time-delayed self-feedback signal to the membrane potential to regulate the precision of the spikes. We also balance the excitatory and inhibitory mechanisms of the neurons to dynamically control their output. Combining these two mechanisms, we propose a deep SNN with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN). Experiments on several standard datasets show that the two modules not only accelerate the convergence of the network but also improve its accuracy. Our model achieves state-of-the-art performance on the MNIST, Fashion-MNIST, and N-MNIST datasets. On CIFAR10, BackEISNN also achieves remarkably good performance with a relatively light structure, competing with state-of-the-art SNNs.
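The time-delayed self-feedback on the membrane potential can be sketched with a simple leaky integrate-and-fire (LIF) neuron. This is a minimal illustration, not the paper's exact formulation: the function name, parameter names, and the fixed feedback strength `beta` are assumptions (in BackEISNN the feedback is adaptive), and the standard hard-reset LIF dynamics stand in for the paper's neuron model.

```python
import numpy as np

def lif_autapse_step(v, s_prev, x, beta, tau=2.0, v_th=1.0):
    """One update of a LIF neuron with a self-feedback (autapse) term.

    v      : membrane potential from the previous time step
    s_prev : spike emitted at the previous time step (0 or 1)
    x      : input current at this time step
    beta   : self-feedback strength (fixed here; adaptive in the paper)
    """
    # Leaky integration of the input current.
    v = v * np.exp(-1.0 / tau) + x
    # Time-delayed self-feedback: the neuron's own previous spike
    # feeds back onto its current membrane potential.
    v = v - beta * s_prev * v_th
    # Fire when the potential crosses threshold, then hard-reset.
    s = (v >= v_th).astype(v.dtype)
    v = v * (1.0 - s)
    return v, s

# Drive one neuron with a constant input and record its spike train.
v = np.zeros(1)
s = np.zeros(1)
spikes = []
for _ in range(10):
    v, s = lif_autapse_step(v, s, x=np.full(1, 0.6), beta=0.5)
    spikes.append(int(s[0]))
```

With a positive `beta`, a spike at step t suppresses the potential at step t+1, spacing spikes out; a negative `beta` would instead facilitate firing, which is the kind of dynamic regulation of spike output the abstract refers to.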