

Reducing the number of support vectors of SVM classifiers using the smoothed separable case approximation.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2012 Apr;23(4):682-8. doi: 10.1109/TNNLS.2012.2186314.

Abstract

In this brief, we propose a new method to reduce the number of support vectors of support vector machine (SVM) classifiers. We formulate the approximation of an SVM solution as a classification problem that is separable in the feature space. Due to the separability, the hard-margin SVM can be used to solve it. This approach, which we call the separable case approximation (SCA), is very similar to the cross-training algorithm explained in [1], which is inspired by editing algorithms. The norm of the weight vector achieved by SCA can, however, become arbitrarily large. For that reason, we propose an algorithm, called the smoothed SCA (SSCA), that additionally upper-bounds the weight vector of the pruned solution and, for the commonly used kernels, reduces the number of support vectors even more. The lower the chosen upper bound, the larger this extra reduction becomes. Upper-bounding the weight vector is important because it ensures numerical stability, reduces the time to find the pruned solution, and avoids overfitting during the approximation phase. On the examined datasets, SSCA drastically reduces the number of support vectors.
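The abstract describes SCA only at a high level. The following is a minimal sketch of the idea, assuming scikit-learn's `SVC`; the synthetic dataset, kernel, and parameter values are illustrative assumptions, not the paper's experimental setup, and the hard margin is approximated with a very large C because an exact hard-margin solver is not exposed directly.

```python
# Minimal sketch of the separable case approximation (SCA) idea.
# Assumptions: scikit-learn's SVC, synthetic data, illustrative parameters.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Step 1: train the original soft-margin SVM.
original = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Step 2: relabel the training points with the SVM's own predictions.
# These labels are separable in the feature space by construction
# (the original decision boundary separates them), so a hard-margin
# SVM applies; a very large C approximates the hard margin here.
y_sep = original.predict(X)
pruned = SVC(kernel="rbf", C=1e6, gamma="scale").fit(X, y_sep)

print("support vectors before:", original.n_support_.sum())
print("support vectors after: ", pruned.n_support_.sum())
```

SSCA additionally imposes an explicit upper bound on the norm of the pruned weight vector; in this sketch the finite C of the second fit plays only a loosely analogous role and is not the paper's formulation.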

