Manuel Fernández-Delgado, Jorge Ribeiro, Eva Cernadas, Senén Barro Ameneiro
Intelligent Systems Group, Gipuzkoa 20018, Spain.
IEEE Trans Neural Netw. 2011 Nov;22(11):1837-48. doi: 10.1109/TNN.2011.2169086. Epub 2011 Oct 6.
Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, combined by a majority-voting decision scheme), which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm which, following the ideas of statistical learning theory used by the support vector machine (SVM), improves generalization by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates the weights (without iterations) from the training patterns and their desired outputs, without any search or numeric function optimization. The calculated weights globally minimize an error function that simultaneously takes into account the training error and the classification margin. Given their analytical and noniterative nature, DPPs are computationally much more efficient than other related approaches (P-Delta and SVM), and their computational complexity is linear in the input dimensionality. Therefore, DPPs are very appealing in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches, while also allowing online learning and, unlike most of them, having no tunable parameters.
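The committee architecture described above can be sketched in a few lines of NumPy. The paper's exact closed-form DPP expression is not reproduced in the abstract, so as an illustrative stand-in each committee member's weights below come from a ridge-regression fit to the ±1 targets, which shares the noniterative, closed-form spirit; the members differ only by a slightly different ridge penalty. All names here (`fit_committee`, `predict`, `base_reg`) are hypothetical, not from the paper.

```python
import numpy as np

def fit_committee(X, y, n_perceptrons=3, base_reg=1e-3):
    """Noniterative (closed-form) weights for a committee of threshold perceptrons.

    Illustrative stand-in, NOT the paper's exact DPP expression: each member's
    weights are the closed-form ridge-regression solution fit to the +/-1
    targets; members differ only by a slightly different ridge penalty.
    """
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])  # append a constant bias input
    W = np.empty((n_perceptrons, d + 1))
    for i in range(n_perceptrons):
        reg = base_reg * (i + 1)
        # closed-form minimizer of ||Xb w - y||^2 + reg ||w||^2
        W[i] = np.linalg.solve(Xb.T @ Xb + reg * np.eye(d + 1), Xb.T @ y)
    return W

def predict(W, X):
    """Binary threshold outputs of each perceptron, combined by majority vote."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    votes = np.sign(Xb @ W.T)          # threshold activations, one column per member
    return np.sign(votes.sum(axis=1))  # majority-voting committee decision

# Toy linearly separable two-class problem
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.],
              [2., 2.], [2., 3.], [3., 2.], [3., 3.]])
y = np.array([-1., -1., -1., -1., 1., 1., 1., 1.])
W = fit_committee(X, y)
print(predict(W, X))  # matches y on this toy set
```

Note that, unlike this sketch, the actual DPP weights minimize an error function combining training error and classification margin, and the method supports online learning with no tunable parameters.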