Pan Binbin, Lai Jianhuang, Shen Lixin
College of Mathematics and Computational Science, Shenzhen University, Shenzhen, China; School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou, China.
School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China.
Neural Netw. 2014 Aug;56:22-34. doi: 10.1016/j.neunet.2014.04.003. Epub 2014 May 2.
In this paper, we propose a new form of regularization that can exploit the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned; this linearity allows us to develop efficient algorithms that exploit the labels. We consider three applications of the ideal regularization. First, we use it to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Second, we employ it to learn a data-dependent kernel matrix from an initial kernel matrix that encodes prior similarity information, geometric structure, and the labels of the data. Finally, we incorporate the ideal regularization into several state-of-the-art kernel learning problems; with this regularization, these problems can be reformulated as simpler ones that admit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
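To make the idea concrete, the sketch below illustrates the standard "ideal kernel" construction from labels (entry (i, j) is 1 when samples i and j share a label, 0 otherwise) and a linear-in-K regularizer tr(TK) built from it. This is only an illustrative assumption of the general setup, not the paper's exact formulation; the function names, the additive update `K + gamma * T`, and the parameter `gamma` are hypothetical choices for the sketch.

```python
import numpy as np

def ideal_kernel(labels):
    """Ideal kernel T: T[i, j] = 1 if samples i and j share a label, else 0."""
    y = np.asarray(labels)
    return (y[:, None] == y[None, :]).astype(float)

def ideal_regularizer(K, labels):
    """Linear function of K: tr(T K), which is large when K aligns with the labels.

    (Hypothetical form chosen for illustration; the paper's regularizer is
    linear in K but its exact definition is not given in the abstract.)
    """
    return np.trace(ideal_kernel(labels) @ K)

def label_adjusted_kernel(K, labels, gamma=0.5):
    """Shift a base kernel toward the ideal kernel by an additive update.

    A simple, hypothetical way to 'incorporate labels into a standard
    kernel': same-class similarities are boosted by gamma.
    """
    return K + gamma * ideal_kernel(labels)
```

For instance, starting from a base kernel `K = np.eye(3)` with labels `[0, 0, 1]`, `label_adjusted_kernel` raises the similarity of the two same-class points from 0 to `gamma`, while leaving cross-class entries untouched.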