Takahashi Norikazu, Guo Jun, Nishi Tetsuo
Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 819-0395, Japan.
IEEE Trans Neural Netw. 2008 Jun;19(6):971-82. doi: 10.1109/TNN.2007.915116.
Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given l training samples, SVR is formulated as a convex quadratic programming (QP) problem with l pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and the subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. In addition, efficient implementation techniques for the SMO algorithm are presented, and the resulting implementation is compared experimentally with other SMO algorithms.
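To make the abstract's setting concrete, the following is a minimal, hypothetical sketch of an SMO-style solver for the SVR dual. It is not the paper's algorithm: for brevity it sets the epsilon-insensitive width to zero, so the dual reduces to minimizing 0.5*b^T K b - y^T b subject to sum(b) = 0 and -C <= b_k <= C in the combined variables b_k = alpha_k - alpha_k^*. Each step picks a pair of variables violating the KKT conditions and solves the two-variable subproblem analytically; the function names and the pair-selection rule are illustrative assumptions.

```python
import numpy as np

def smo_svr_sketch(K, y, C=10.0, tol=1e-6, max_iter=1000):
    """Simplified SMO for the SVR dual with epsilon = 0 (illustrative sketch).

    Solves: min 0.5 * b^T K b - y^T b  s.t.  sum(b) = 0,  -C <= b_k <= C,
    where b_k = alpha_k - alpha_k^* are the combined pair variables.
    """
    n = len(y)
    beta = np.zeros(n)
    for _ in range(max_iter):
        g = K @ beta - y                        # gradient of the dual objective
        up = beta < C - 1e-12                   # variables that may still increase
        dn = beta > -C + 1e-12                  # variables that may still decrease
        i = np.where(up)[0][np.argmin(g[up])]   # most violating index to increase
        j = np.where(dn)[0][np.argmax(g[dn])]   # most violating index to decrease
        if g[j] - g[i] < tol:                   # KKT conditions hold: optimal
            break
        # Two-variable subproblem: keep beta_i + beta_j fixed, take a Newton
        # step of size t on beta_i, then clip t to stay inside the box.
        eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
        t = (g[j] - g[i]) / max(eta, 1e-12)
        t = min(t, C - beta[i], beta[j] + C)
        beta[i] += t
        beta[j] -= t
    # Recover the bias from the samples whose beta is strictly inside the box.
    free = np.abs(beta) < C - 1e-9
    b = np.mean((y - K @ beta)[free]) if free.any() else 0.0
    return beta, b

# Usage: exactly linear data with a linear kernel, so the fit is exact.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x
K = np.outer(x, x)
beta, b = smo_svr_sketch(K, y)
pred = K @ beta + b
```

On this toy problem the first pair update already reaches the optimum, and the equality constraint sum(b) = 0 is preserved by construction, since the two chosen variables move by equal and opposite amounts.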