School of Computer Science and Technology, China University of Mining and Technology, China.
Neural Netw. 2013 Oct;46:172-82. doi: 10.1016/j.neunet.2013.05.003. Epub 2013 May 24.
Some multiple kernel learning (MKL) models are usually solved by alternating optimization, where one alternately solves SVMs in the dual and updates the kernel weights. Since optimization in the primal and in the dual can achieve the same aim, it is worth exploring how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, again resorting to alternating optimization: one cycle solves SVMs in the primal by the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. Notably, the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more efficient than solving them in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, and find that optimizing the empirical Rademacher complexity yields a particular choice of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method.
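To make the alternating scheme concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the paper's implementation: the function name lp_mkl_primal is hypothetical; the inner primal solve uses a least-squares (LS-SVM-style) loss so that each iteration reduces to a symmetric positive-definite linear system solvable by plain conjugate gradient, whereas the paper solves an SVM loss with a preconditioned CG solver; and the closed-form weight update shown is the standard Lp-norm MKL expression d_m proportional to ||w_m||^(2/(p+1)), renormalized so that ||d||_p = 1, which may differ from the paper's exact update.

```python
# Sketch of alternating Lp-norm MKL in the primal (assumptions noted above).
import numpy as np
from scipy.sparse.linalg import cg

def lp_mkl_primal(kernels, y, p=2.0, lam=1e-2, n_outer=20):
    """kernels: list of (n, n) PSD Gram matrices; y: (n,) labels in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    n, M = len(y), len(kernels)
    d = np.full(M, M ** (-1.0 / p))            # uniform start with ||d||_p = 1
    beta = np.zeros(n)
    for _ in range(n_outer):
        # Cycle 1: with the weights fixed, solve the primal problem.
        # Under a least-squares loss this is the SPD system
        # (K + lam * n * I) beta = y, solved here by conjugate gradient.
        K = sum(dm * Km for dm, Km in zip(d, kernels))
        beta, _ = cg(K + lam * n * np.eye(n), y, x0=beta)
        # Cycle 2: analytical weight update from the per-kernel block norms
        # ||w_m||^2 = d_m^2 * beta^T K_m beta.
        norms_sq = np.array([dm ** 2 * beta @ Km @ beta
                             for dm, Km in zip(d, kernels)])
        norms_sq = np.maximum(norms_sq, 1e-12)  # guard against dead kernels
        d = norms_sq ** (1.0 / (p + 1))         # d_m ~ ||w_m||^(2/(p+1))
        d /= np.sum(d ** p) ** (1.0 / p)        # renormalize: ||d||_p = 1
    return d, beta
```

Because each inner problem here is a single linear system, the sketch keeps the two-cycle structure of the abstract visible: kernel weights are never searched numerically but recomputed in closed form from the current primal solution.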