School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China.
Brain-Inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China.
Neural Netw. 2023 Aug;165:799-808. doi: 10.1016/j.neunet.2023.06.019. Epub 2023 Jun 22.
The backpropagation algorithm has driven the rapid development of deep learning, but it relies on large amounts of labeled data and still differs greatly from how humans learn. The human brain can quickly learn various conceptual knowledge in a self-organized and unsupervised manner, which it accomplishes by coordinating diverse learning rules and structures. Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly. In this paper, taking inspiration from short-term synaptic plasticity, we design an adaptive synaptic filter and introduce an adaptive spiking threshold as a form of neuronal plasticity to enrich the representation ability of SNNs. We also introduce adaptive lateral inhibitory connections to dynamically adjust the spike balance and help the network learn richer features. To speed up and stabilize the training of unsupervised spiking neural networks, we design a samples temporal batch STDP (STB-STDP) rule, which updates weights based on multiple samples and time steps. By integrating the above three adaptive mechanisms with STB-STDP, our model greatly accelerates the training of unsupervised spiking neural networks and improves their performance on complex tasks. Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets. We further test it on the more complex CIFAR10 dataset, and the results fully demonstrate the superiority of our algorithm; ours is also the first work to apply unsupervised STDP-based SNNs to CIFAR10. Moreover, in the small-sample learning scenario, our model far exceeds a supervised ANN with the same structure.
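For readers unfamiliar with STDP-trained SNNs, the sketch below illustrates the general flavor of a trace-based STDP weight update combined with an adaptive spiking threshold. It is a minimal illustration under assumed shapes, constants, and a Poisson-like rate encoding; it is not the paper's adaptive synaptic filter, adaptive lateral inhibition, or STB-STDP rule, and all names and values are placeholders for illustration.

```python
import numpy as np

# Minimal sketch of a single-layer SNN trained with trace-based STDP and an
# adaptive spiking threshold. All constants and shapes are assumptions.

rng = np.random.default_rng(0)
n_in, n_out, T = 784, 100, 100           # input neurons, output neurons, time steps

W = rng.random((n_in, n_out)) * 0.3      # synaptic weights
theta = np.full(n_out, 1.0)              # per-neuron adaptive thresholds
v = np.zeros(n_out)                      # membrane potentials
pre_trace = np.zeros(n_in)               # presynaptic spike traces
post_trace = np.zeros(n_out)             # postsynaptic spike traces

lr, tau_trace, theta_plus, theta_decay = 1e-3, 20.0, 0.05, 1e-4

x = rng.random(n_in)                     # one input sample, rate-coded in [0, 1]

for t in range(T):
    pre_spikes = (rng.random(n_in) < x).astype(float)   # Poisson-like encoding
    v = 0.9 * v + pre_spikes @ W                        # leaky integration
    post_spikes = (v >= theta).astype(float)            # fire above threshold
    v[post_spikes > 0] = 0.0                            # reset fired neurons

    # Adaptive threshold: firing raises a neuron's threshold, which then slowly
    # decays back toward its baseline; this balances activity across the layer.
    theta += theta_plus * post_spikes
    theta -= theta_decay * (theta - 1.0)

    # Exponentially decaying spike traces used by the STDP rule.
    pre_trace = pre_trace * np.exp(-1.0 / tau_trace) + pre_spikes
    post_trace = post_trace * np.exp(-1.0 / tau_trace) + post_spikes

    # Trace-based STDP: potentiate when a postsynaptic spike follows recent
    # presynaptic activity, depress when a presynaptic spike follows recent
    # postsynaptic activity.
    dW = lr * (np.outer(pre_trace, post_spikes) - np.outer(pre_spikes, post_trace))
    W = np.clip(W + dW, 0.0, 1.0)
```

In this toy version the weight change for each time step comes only from the current sample; the paper's STB-STDP instead accumulates updates over batches of samples and multiple moments in time before applying them, which is what stabilizes and accelerates unsupervised training.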