Akoa François Bertrand
Planning Department, AES Sonel, Douala, Littoral 4077, Cameroon.
IEEE Trans Neural Netw. 2008 Nov;19(11):1854-72. doi: 10.1109/TNN.2008.2003299.
Today, decomposition methods are among the most popular techniques for training support vector machines (SVMs). When kernels that do not satisfy Mercer's condition are used, new techniques must be designed to handle the nonpositive-semidefinite kernel matrices resulting from this choice. In this work, we incorporate difference of convex functions (DC) optimization techniques into decomposition methods to tackle this difficulty. The new approach requires no modification of the problem, and we show that the mere use of a truncated DC algorithm (DCA) in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is given without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition, in the spirit of DC optimization. Numerical results show the robustness and efficiency of the new methods compared with state-of-the-art software.
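To make the DC idea concrete, the following is a minimal sketch (not the authors' algorithm) of a plain DCA applied to the full SVM dual with an indefinite kernel matrix: the quadratic term is split as a difference of two convex functions by adding and subtracting a multiple of the identity, and each iteration linearizes the concave part and solves the resulting convex subproblem. The function name `dca_svm_dual`, the choice of the convexification parameter `rho`, and the use of a generic SLSQP solver for the subproblem are illustrative assumptions; the paper instead uses a truncated DCA inside an SMO-type decomposition over small working sets.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: DCA on the SVM dual
#   min_a 0.5 a^T Q a - e^T a   s.t.  0 <= a <= C,  y^T a = 0,
# where Q may be indefinite (non-Mercer kernel). We write
#   0.5 a^T Q a - e^T a = [0.5 a^T (Q + rho*I) a - e^T a] - [0.5 * rho * ||a||^2]
# with rho >= max(0, -lambda_min(Q)), so the first bracket is convex.
def dca_svm_dual(Q, y, C, max_iter=50, tol=1e-6):
    n = Q.shape[0]
    rho = max(0.0, -np.linalg.eigvalsh(Q).min())    # convexification parameter (assumption)
    Q1 = Q + rho * np.eye(n)                         # Hessian of the convex part
    a = np.zeros(n)                                  # feasible starting point

    bounds = [(0.0, C)] * n
    eq_con = {'type': 'eq', 'fun': lambda a_: y @ a_}  # equality constraint y^T a = 0

    for _ in range(max_iter):
        grad_h = rho * a                             # gradient of the concave part 0.5*rho*||a||^2
        # DCA convex subproblem: min 0.5 a^T Q1 a - e^T a - grad_h^T a over the feasible set
        obj = lambda a_: 0.5 * a_ @ Q1 @ a_ - a_.sum() - grad_h @ a_
        jac = lambda a_: Q1 @ a_ - 1.0 - grad_h
        res = minimize(obj, a, jac=jac, bounds=bounds,
                       constraints=[eq_con], method='SLSQP')
        a_new = res.x
        if np.linalg.norm(a_new - a) < tol:          # stop when the iterates stabilize
            return a_new
        a = a_new
    return a
```

In the decomposition setting described in the abstract, the same DC splitting would be applied only to the small subproblem defined by the current working set (selected, e.g., with second-order information as in SMO-type methods), and the DCA would be truncated to a few inner iterations rather than run to convergence.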