Tang Jingjing, Tian Yingjie, Zhang Peng, Liu Xiaohui
IEEE Trans Neural Netw Learn Syst. 2018 Aug;29(8):3463-3477. doi: 10.1109/TNNLS.2017.2728139. Epub 2017 Aug 11.
Multiview learning (MVL), by exploiting the complementary information among multiple feature sets, can improve the performance of many existing learning tasks. Support vector machine (SVM)-based models have been frequently used for MVL. A typical SVM-based MVL model is SVM-2K, which extends SVM for MVL by using the distance minimization version of kernel canonical correlation analysis. However, SVM-2K cannot fully exploit the complementary information among different feature views. Recently, a framework of learning using privileged information (LUPI) has been proposed to model data with complementary information. Motivated by LUPI, we propose a new multiview privileged SVM model, PSVM-2V, for MVL. This brings a new perspective that extends LUPI to MVL. The optimization of PSVM-2V can be solved with a classical quadratic programming solver. We theoretically analyze the performance of PSVM-2V from the viewpoints of the consensus principle, the generalization error bound, and the SVM-2K learning model. Experimental results on 95 binary data sets demonstrate the effectiveness of the proposed method.
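To make the two-view coupling concrete: SVM-2K (and, by extension, PSVM-2V) trains one classifier per feature view while penalizing disagreement between the views' predictions on the same examples, which enforces the consensus principle mentioned above. The sketch below is a minimal illustration only, not the authors' PSVM-2V or exact SVM-2K: it uses a penalized linear formulation trained by subgradient descent instead of the paper's quadratic program, and the synthetic data and hyperparameter values (C, D, learning rate) are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80
# Synthetic two-view data: both views carry the same label signal
# plus independent noise, so they hold complementary information.
y = np.where(rng.standard_normal(n) > 0, 1.0, -1.0)
XA = y[:, None] + 0.5 * rng.standard_normal((n, 3))  # view A, 3 features
XB = y[:, None] + 0.5 * rng.standard_normal((n, 4))  # view B, 4 features

wA, bA = np.zeros(3), 0.0   # linear classifier for view A
wB, bB = np.zeros(4), 0.0   # linear classifier for view B
C, D, lr = 1.0, 1.0, 0.01   # hinge weight, view-agreement weight, step size

for _ in range(2000):
    fA = XA @ wA + bA
    fB = XB @ wB + bB
    mA = y * fA < 1            # hinge-loss margin violations, view A
    mB = y * fB < 1            # hinge-loss margin violations, view B
    s = np.sign(fA - fB)       # subgradient of the |fA - fB| coupling term
    # Subgradients of: 0.5||wA||^2 + 0.5||wB||^2
    #                + C * (hinge_A + hinge_B) + D * sum|fA - fB|
    gwA = wA - C * (y[mA, None] * XA[mA]).sum(0) + D * (s[:, None] * XA).sum(0)
    gbA = -C * y[mA].sum() + D * s.sum()
    gwB = wB - C * (y[mB, None] * XB[mB]).sum(0) - D * (s[:, None] * XB).sum(0)
    gbB = -C * y[mB].sum() - D * s.sum()
    wA -= lr * gwA / n; bA -= lr * gbA / n
    wB -= lr * gwB / n; bB -= lr * gbB / n

# Consensus prediction: average the two views' decision values.
pred = np.sign((XA @ wA + bA + XB @ wB + bB) / 2)
acc = float(np.mean(pred == y))
print(acc)
```

The D-weighted term is what distinguishes this from training two independent SVMs: it drags the two decision functions toward agreement on every example, which is the distance-minimization idea the abstract attributes to SVM-2K.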