Auer Peter, Burgsteiner Harald, Maass Wolfgang
Chair for Information Technology, University of Leoben, Franz-Josef-Strasse 18, A-8700 Leoben, Austria.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority vote of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar model of a "multi-layer perceptron", the parallel perceptron considered here has only binary values as outputs of the gates on the hidden layer. It was long thought that no competitive learning algorithm exists for these extremely simple neural networks, which also came to be known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer with sigmoidal gates (or RBF gates), and that one has to tune the weights on at least two successive layers, in order to achieve satisfactory learning results for any class of neural networks that yields universal approximators. We show that this assumption is false by exhibiting a simple learning algorithm for parallel perceptrons: the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule has to tune only a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons, such as MADALINE. These features make the p-delta rule attractive not only as a biologically more realistic alternative to backprop in neural circuits, but also for implementation in special-purpose hardware. We show that the p-delta rule also implements gradient descent, with regard to a suitable error measure, although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony, M. (2007). On the generalization error of fixed combinations of classifiers. Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE international joint conference on neural networks: Vol. 2 (pp. 967-972)] that quite satisfactory bounds can also be proved for the generalization error of this new learning rule.
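To make the architecture and the spirit of the learning rule concrete, the following is a minimal NumPy sketch of a parallel perceptron with majority-vote output and a simplified p-delta-style update. The class name, hyper-parameter values, and the exact handling of the clear-margin and normalization steps are illustrative assumptions made for this sketch; the paper itself specifies the precise rule, including its squashing function, error measure, and learning-rate schedule.

import numpy as np

class ParallelPerceptron:
    """Array of perceptrons with a majority-vote binary output (labels in {-1, +1})."""

    def __init__(self, n_inputs, n_perceptrons=11, eta=0.01, gamma=0.05, mu=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per hidden perceptron; hidden outputs are strictly binary.
        self.W = rng.normal(size=(n_perceptrons, n_inputs))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.eta, self.gamma, self.mu = eta, gamma, mu

    def predict(self, x):
        # Binary output of the parallel perceptron = majority vote of its perceptrons.
        return 1.0 if np.sign(self.W @ x).sum() >= 0 else -1.0

    def update(self, x, y):
        """One simplified p-delta-style update for a single example; y must be -1 or +1."""
        acts = self.W @ x
        out = 1.0 if np.sign(acts).sum() >= 0 else -1.0
        for i in range(self.W.shape[0]):
            if out != y and np.sign(acts[i]) != y:
                # Overall output is wrong: nudge the perceptrons that voted wrongly toward y.
                self.W[i] += self.eta * y * x
            elif out == y and abs(acts[i]) < self.gamma:
                # Output correct but this perceptron's margin is small:
                # push it further in its current direction ("clear margin" idea).
                self.W[i] += self.eta * self.mu * np.sign(acts[i]) * x
            # The rule assumes bounded weights; keep each weight vector near unit length.
            self.W[i] /= np.linalg.norm(self.W[i])

Note that, as the abstract emphasizes, only the single layer of weight vectors W is tuned, and each perceptron needs only its own binary vote and the binary global output to perform its update, rather than high-precision analog values.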