Romero Enrique, Toppo Daniel
IEEE Trans Neural Netw. 2007 May;18(3):959-63. doi: 10.1109/TNN.2007.891656.
Support vector machines (SVMs) usually need a large number of support vectors to form their output. Recently, several models have been proposed to build SVMs with a small number of basis functions while maintaining the property that their hidden-layer weights are a subset of the data (the support vectors). This property is also present in some algorithms for feedforward neural networks (FNNs) that construct the network sequentially, leading to sparse models where the number of hidden units can be explicitly controlled. An experimental study on several benchmark data sets, comparing SVMs and the aforementioned sequential FNNs, was carried out. The experiments were performed under the same conditions for all the models, and they can be seen as a comparison of SVMs and FNNs when both models are restricted to use similar hidden-layer weights. Accuracies were found to be very similar. Regarding the number of support vectors, sequential FNNs constructed models with fewer hidden units than standard SVMs and in the same range as "sparse" SVMs. Computational times were lower for SVMs.
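The contrast the abstract draws can be illustrated with a minimal sketch (not the paper's code, and using scikit-learn models rather than the sequential FNN algorithms studied): an RBF-kernel SVM lets training determine the number of support vectors, while a feedforward network fixes the number of hidden units a priori. The data set and the choice of 10 hidden units are arbitrary illustrative assumptions.

```python
# Hypothetical sketch: model size of a standard SVM (chosen by training)
# vs. a feedforward network (fixed in advance), on a toy benchmark.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standard RBF-kernel SVM: its "hidden layer" is the set of support
# vectors, whose size the optimizer chooses and which can be large.
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("SVM support vectors:", int(svm.n_support_.sum()))
print("SVM test accuracy: %.3f" % svm.score(X_te, y_te))

# FNN: the number of hidden units is set explicitly (10 here), so the
# sparsity of the model is under direct control.
fnn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("FNN hidden units:", fnn.hidden_layer_sizes[0])
print("FNN test accuracy: %.3f" % fnn.score(X_te, y_te))
```

On data sets like this one the two models typically reach similar accuracy, while the SVM retains far more support vectors than the network's fixed hidden-unit budget, mirroring the abstract's finding.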