IEEE Trans Neural Netw Learn Syst. 2015 Jan;26(1):51-61. doi: 10.1109/TNNLS.2014.2309939.
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with a (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation, and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework achieves better classification performance than other, similar MTL approaches.
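To make the scalarization idea concrete, the following is a minimal Python sketch, not the paper's SVM-based algorithm: two toy convex task objectives are combined with nonnegative (conic) weights, and sweeping the weights traces distinct points on the Pareto front, with the uniform weighting recovering the average-of-objectives special case. The toy losses and all function names are illustrative assumptions.

```python
# A minimal sketch (assumed toy setup, not the authors' MT-MKL method):
# conic-combination scalarization of two task objectives over a shared
# parameter, swept to trace points on the Pareto front.
import numpy as np

def task_objectives(w):
    """Two convex toy 'task' losses over a shared scalar parameter w."""
    f1 = (w - 1.0) ** 2   # task 1 prefers w = +1
    f2 = (w + 1.0) ** 2   # task 2 prefers w = -1
    return f1, f2

def scalarized_minimizer(lam):
    """Closed-form minimizer of lam*f1 + (1 - lam)*f2 for the toy losses."""
    # d/dw [lam*(w-1)^2 + (1-lam)*(w+1)^2] = 0  =>  w = 2*lam - 1
    return 2.0 * lam - 1.0

# Sweeping the conic weights traces a path on the Pareto front;
# lam = 0.5 corresponds to optimizing the plain average of the objectives.
for lam in np.linspace(0.0, 1.0, 5):
    w_star = scalarized_minimizer(lam)
    f1, f2 = task_objectives(w_star)
    tag = "  <- uniform average" if np.isclose(lam, 0.5) else ""
    print(f"lam={lam:.2f}  w*={w_star:+.2f}  f1={f1:.2f}  f2={f2:.2f}{tag}")
```

Each weight vector yields one Pareto-optimal trade-off between the tasks, which mirrors the abstract's point that averaging the objectives fixes a single point on the PF, while varying the conic combination explores a path along it.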