Chang C C, Lin C J
Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan.
Neural Comput. 2001 Sep;13(9):2119-47. doi: 10.1162/089976601750399335.
The nu-support vector machine (nu-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter nu for controlling the number of support vectors. In this article, we investigate the relation between nu-SVM and C-SVM in detail. We show that in general they are two different problems with the same optimal solution set. Hence, we may expect that many numerical aspects of solving them are similar. However, compared to regular C-SVM, the formulation of nu-SVM is more complicated, so up to now there have been no effective methods for solving large-scale nu-SVM. We propose a decomposition method for nu-SVM that is competitive with existing methods for C-SVM. We also discuss the behavior of nu-SVM through some numerical experiments.
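For orientation, a sketch of the two dual problems being compared, written in the standard notation of this literature rather than quoted from the article: Q_{ij} = y_i y_j K(x_i, x_j) is the kernel matrix with labels absorbed, e is the vector of all ones, and l is the number of training examples.

C-SVM dual:
    \min_{\alpha}\ \tfrac{1}{2}\alpha^{\top} Q \alpha - e^{\top}\alpha
    \text{subject to}\ y^{\top}\alpha = 0,\quad 0 \le \alpha_i \le C,\ i = 1,\dots,l.

nu-SVM dual (Schölkopf et al., 2000):
    \min_{\alpha}\ \tfrac{1}{2}\alpha^{\top} Q \alpha
    \text{subject to}\ y^{\top}\alpha = 0,\quad e^{\top}\alpha \ge \nu,\quad 0 \le \alpha_i \le 1/l,\ i = 1,\dots,l.

Roughly speaking, nu replaces C: the box constraint is fixed at 1/l and nu enters only through the constraint e^{\top}\alpha \ge \nu, which is what links it to the fraction of support vectors. The abstract's statement that the two problems share the same optimal solution set refers, up to a scaling of alpha, to the correspondence between these two duals for suitably related values of C and nu.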