Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan.
Neural Netw. 2012 Jan;25(1):57-69. doi: 10.1016/j.neunet.2011.06.019. Epub 2011 Jul 14.
Based on the reduced support vector machine (RSVM), we propose a multi-view algorithm, two-teachers-one-student (2T1S), for semi-supervised learning (SSL). Unlike typical multi-view methods, with RSVM the reduced sets provide different views in the represented kernel feature space rather than in the input space. No label information is needed to select the reduced sets, which makes it possible to apply RSVM to SSL. Our algorithm blends the concepts of co-training and consensus training. Through co-training, the classifiers generated from two views "teach" a third classifier, trained on the remaining view, to learn, and this process is repeated for every teachers-student combination. Through consensus training, predictions agreed on by more than one view give higher confidence when labeling unlabeled data. The results show that the proposed 2T1S achieves high cross-validation accuracy, even compared with training that uses all the label information.
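The abstract describes the 2T1S teaching loop only in prose; a minimal sketch of one teaching round, under stated assumptions, might look like the following. Here the three views are simulated as disjoint feature subsets rather than derived from RSVM reduced sets in the kernel feature space as in the paper, the base learner is scikit-learn's SVC, and all variable names are illustrative.

```python
# Hypothetical sketch of one 2T1S round: for each choice of student view, the
# two teacher views label the unlabeled points on which they agree (consensus),
# and those points are added to the student's training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data: a few labeled points and a larger unlabeled pool.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)
labeled = rng.choice(len(X), size=30, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
X_lab, y_lab, X_unl = X[labeled], y[labeled], X[unlabeled]

# Three "views": simulated here as disjoint feature subsets; in the paper the
# views come from different RSVM reduced sets, not from input-space splits.
views = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]

students = []
for s in range(3):                      # each view takes a turn as the student
    teachers = [v for v in range(3) if v != s]
    # Teachers are trained on the labeled data restricted to their own views.
    preds = [
        SVC(kernel="rbf", gamma="scale")
        .fit(X_lab[:, views[t]], y_lab)
        .predict(X_unl[:, views[t]])
        for t in teachers
    ]
    agree = preds[0] == preds[1]        # consensus: both teachers must agree
    # Student learns from the original labels plus the consensus-labeled points.
    X_aug = np.vstack([X_lab[:, views[s]], X_unl[agree][:, views[s]]])
    y_aug = np.concatenate([y_lab, preds[0][agree]])
    students.append(SVC(kernel="rbf", gamma="scale").fit(X_aug, y_aug))
```

In practice this round would be iterated, with newly labeled points feeding back into the teachers; the single pass above is only meant to show how the co-training (teachers to student) and consensus (agreement before labeling) ideas fit together.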