Chen Pai-Hsuen, Fan Rong-En, Lin Chih-Jen
IEEE Trans Neural Netw. 2006 Jul;17(4):893-908. doi: 10.1109/TNN.2006.875973.
Decomposition methods are currently among the major approaches for training support vector machines; they differ mainly in how the working set is selected. Existing implementations and analyses usually consider specific selection rules. This paper studies sequential minimal optimization (SMO)-type decomposition methods under a general and flexible scheme for choosing the two-element working set. The main results include: 1) a simple asymptotic convergence proof; 2) a general explanation of the shrinking and caching techniques; and 3) the linear convergence of the methods. Extensions to some support vector machine variants are also discussed.
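The abstract only names the technique, so the following is a minimal sketch, not taken from the paper: it implements one classical instance of the two-element working set selection the paper generalizes, the maximal-violating-pair rule, on the standard C-SVM dual. All names here (smo_train, K, y, C) are illustrative assumptions; shrinking and caching are omitted.

```python
import numpy as np

def smo_train(K, y, C=1.0, tol=1e-3, max_iter=10000):
    """Minimal SMO-type solver for the C-SVM dual (illustrative sketch):
        min_a 0.5 a^T Q a - e^T a   s.t.  y^T a = 0,  0 <= a <= C,
    with Q[s,t] = y[s]*y[t]*K[s,t].  The two-element working set is
    chosen by the classical maximal-violating-pair rule."""
    y = np.asarray(y, dtype=float)
    n = y.size
    Q = (y[:, None] * y[None, :]) * K
    a = np.zeros(n)
    grad = -np.ones(n)                      # grad f(a) = Q a - e; here a = 0

    for _ in range(max_iter):
        # Indices whose variable can still move "up" / "down" feasibly.
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        low = ((y < 0) & (a < C)) | ((y > 0) & (a > 0))
        if not up.any() or not low.any():
            break
        G = -y * grad                       # violation measure -y_t grad_t
        i = np.where(up)[0][np.argmax(G[up])]
        j = np.where(low)[0][np.argmin(G[low])]
        if G[i] - G[j] < tol:               # approximate KKT satisfied: stop
            break
        # Analytic solution of the two-variable subproblem along the
        # feasible direction d_i = y_i, d_j = -y_j (keeps y^T a = 0).
        quad = max(Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j], 1e-12)
        delta = (G[i] - G[j]) / quad
        # Clip the step so both updated variables stay inside [0, C].
        delta = min(delta, C - a[i] if y[i] > 0 else a[i])
        delta = min(delta, a[j] if y[j] > 0 else C - a[j])
        a[i] += y[i] * delta
        a[j] -= y[j] * delta
        grad += delta * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a
```

For example, with a linear kernel K = X @ X.T and labels y in {-1, +1}, the returned a gives the support vectors as the indices with a > 0. The paper's general framework covers selection rules beyond this maximal-violating-pair instance and analyzes their asymptotic and linear convergence.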