IEEE Trans Neural Netw Learn Syst. 2017 Jun;28(6):1411-1424. doi: 10.1109/TNNLS.2016.2541339. Epub 2016 Mar 30.
The spiking neural network (SNN) is the third generation of neural networks and performs remarkably well in cognitive tasks such as pattern recognition. The temporal neural encoding mechanism found in the biological hippocampus gives SNNs more powerful computational capability than networks with other encoding schemes. However, this temporal encoding approach requires neurons to process information serially in time, which significantly reduces learning efficiency. To retain the powerful computational capability of the temporal encoding mechanism while overcoming its low efficiency in the training of SNNs, a new training algorithm, the accurate synaptic-efficiency adjustment method, is proposed in this paper. Inspired by the selective attention mechanism of the primate visual system, our algorithm treats only the target spike times as attention areas and ignores the voltage states at non-target times, resulting in a significant reduction of training time. In addition, our algorithm employs a cost function based on the voltage difference between the membrane potential of the output neuron and the firing threshold of the SNN, instead of the traditional distance between precise firing times. A normalized spike-timing-dependent-plasticity learning window is applied to assign this error to the different synapses and guide their training. Comprehensive simulations are conducted to investigate the learning properties of our algorithm, with input neurons emitting both single and multiple spikes. The simulation results indicate that our algorithm achieves higher learning performance than other existing methods and state-of-the-art efficiency in the training of SNNs.
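To make the described update rule concrete, the following is a minimal sketch of a voltage-difference-based weight update under a normalized STDP-like window. All names, kernel shapes, and constants (srm_kernel, stdp_window, TAU_M, TAU_S, TAU_STDP, THETA, ETA) are illustrative assumptions, not values or definitions taken from the paper.

```python
import numpy as np

# Illustrative constants (assumed, not from the paper)
TAU_M = 10.0      # membrane time constant (ms)
TAU_S = 2.5       # synaptic time constant (ms)
TAU_STDP = 5.0    # STDP credit window time constant (ms)
THETA = 1.0       # firing threshold of the output neuron
ETA = 0.05        # learning rate

def srm_kernel(dt):
    """Postsynaptic response of one input spike dt ms in the past (0 if dt <= 0)."""
    dt = np.asarray(dt, dtype=float)
    k = np.exp(-np.clip(dt, 0.0, None) / TAU_M) - np.exp(-np.clip(dt, 0.0, None) / TAU_S)
    return np.where(dt > 0, k, 0.0)

def membrane_potential(t, weights, input_spikes):
    """Potential of the output neuron at time t from all presynaptic spikes."""
    return sum(w * srm_kernel(t - np.asarray(spikes)).sum()
               for w, spikes in zip(weights, input_spikes))

def stdp_window(dt):
    """Causal STDP-like kernel: more credit for presynaptic spikes closer before the target time."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0, np.exp(-np.clip(dt, 0.0, None) / TAU_STDP), 0.0)

def update_weights(weights, input_spikes, target_times):
    """One training pass: only target spike times are evaluated ("attention areas").
    The voltage error (threshold minus potential at the target time) is distributed
    across synapses in proportion to a normalized STDP-like window."""
    weights = weights.copy()
    for t_d in target_times:
        error = THETA - membrane_potential(t_d, weights, input_spikes)
        credit = np.array([stdp_window(t_d - np.asarray(s)).sum()
                           for s in input_spikes])
        total = credit.sum()
        if total > 0:
            weights += ETA * error * credit / total  # normalized credit assignment
    return weights

# Toy usage: 3 input neurons, each emitting 3 spikes, one target spike at t = 12 ms.
rng = np.random.default_rng(0)
spikes = [rng.uniform(0, 10, size=3) for _ in range(3)]
w = rng.uniform(0.1, 0.5, size=3)
for _ in range(50):
    w = update_weights(w, spikes, target_times=[12.0])
print("potential at target time:", membrane_potential(12.0, w, spikes))
```

Under these assumptions, the updates drive the output neuron's potential toward the firing threshold only at the target spike times, while the voltage trace at non-target times is never evaluated, which is the source of the claimed reduction in training time.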