Han Yina, Yang Yixin, Li Xuelong, Liu Qingyu, Ma Yuanliang
IEEE Trans Neural Netw Learn Syst. 2018 Jan 15. doi: 10.1109/TNNLS.2017.2785329.
This paper examines a matrix-regularized multiple kernel learning (MKL) technique based on a notion of (r,p)-norms. For the problem of learning a linear combination of kernels in the support vector machine-based framework, model complexity is typically controlled through various regularization strategies on the combined kernel weights. Recent research has developed a generalized ℓp-norm MKL framework with a tunable parameter p (p ≥ 1) to support controlled intrinsic sparsity. Unfortunately, this "1-D" vector ℓp-norm hardly exploits potentially useful information on how the base kernels "interact." To allow for higher-order kernel-pair relationships, we extend the "1-D" vector ℓp-MKL to "2-D" matrix (r,p)-norms (1 ≤ r, p < ∞). We develop a new formulation and an efficient optimization strategy for (r,p)-MKL with guaranteed convergence. A theoretical analysis and experiments on seven UCI data sets demonstrate the superiority of (r,p)-MKL over ℓp-MKL in various scenarios.
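The abstract does not define the (r,p)-norm explicitly; as a sketch, the standard mixed (r,p) matrix norm, applied here to a matrix Θ = (θ_{mk}) of kernel-pair weights (this notation is assumed for illustration, not taken from the paper), reads

\[
\|\Theta\|_{r,p} \;=\; \left( \sum_{m=1}^{M} \left( \sum_{k=1}^{M} |\theta_{mk}|^{r} \right)^{p/r} \right)^{1/p}, \qquad 1 \le r,\, p < \infty.
\]

When Θ is restricted to a diagonal matrix with entries θ_m, this reduces to the vector ℓp-norm (Σ_m |θ_m|^p)^{1/p}, recovering ℓp-MKL as the "1-D" special case the abstract contrasts against.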