Rotermund David, Pawelzik Klaus R
Institute for Theoretical Physics, University of Bremen, Bremen, Germany.
Front Comput Neurosci. 2019 Aug 13;13:55. doi: 10.3389/fncom.2019.00055. eCollection 2019.
Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals, in stark contrast to the discrete action potentials stochastically exchanged among neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks, which represent a compromise between non-spiking and spiking versions of generative models. What has been missing, however, are algorithms for finding weight sets that optimize the output performance of deep SbS networks with many layers. Here, a learning rule for feed-forward SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a deep convolutional SbS network for classifying handwritten digits achieves a classification performance of roughly 99.3% on the MNIST test data when the learning rule is applied together with an optimizer, thereby approaching the benchmark results of ANNs without extensive parameter optimization. We envision this learning rule for SbS networks to provide a new basis for research in neuroscience and for technical applications, especially when implemented on specialized computational hardware.
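For readers unfamiliar with the SbS framework, the inference dynamics referenced in the abstract can be illustrated with a minimal sketch. In the SbS literature, a population of latent variables `h` (non-negative, summing to one) is updated once per incoming spike, multiplicatively reweighting each hidden cause by how well it predicts the observed input channel. The exact update below, including the step size `eps` and the normalization by `1 + eps`, is an illustrative reconstruction based on prior SbS publications, not code from this paper:

```python
def sbs_update(h, W, s, eps=0.1):
    """One Spike-by-Spike inference step.

    h   : list of latent activities, non-negative, summing to 1
    W   : W[s][i] is the probability that hidden cause i emits a
          spike on input channel s (columns assumed normalized)
    s   : index of the input channel on which a spike arrived
    eps : update strength per spike (illustrative value)

    Each hidden cause is boosted in proportion to its relative
    likelihood of having produced the observed spike; dividing by
    (1 + eps) keeps the activities normalized to sum 1.
    """
    # Probability that the current latent state generates a spike on s
    denom = sum(h[j] * W[s][j] for j in range(len(h)))
    return [
        (h[i] + eps * h[i] * W[s][i] / denom) / (1.0 + eps)
        for i in range(len(h))
    ]


# Toy example: two hidden causes, two input channels.
# Cause 0 mostly drives channel 0, cause 1 mostly drives channel 1.
W = [[0.9, 0.1],   # channel 0
     [0.1, 0.9]]   # channel 1
h = [0.5, 0.5]

# A spike on channel 0 shifts belief toward cause 0.
h = sbs_update(h, W, s=0, eps=0.1)
```

Repeating this update over a stream of spikes drives `h` toward the hidden causes most consistent with the input, which is the per-spike inference that the learning rule in the paper builds upon.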