Tang Long, Yan Pengfei, Tian Yingjie, Pardalos Panos M
School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing, 210044, China; Research Institute of Talent Big Data, Nanjing University of Information Science & Technology, Nanjing, 210044, China.
School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing, 210044, China.
Neural Netw. 2025 Jan;181:106763. doi: 10.1016/j.neunet.2024.106763. Epub 2024 Oct 2.
Unlike traditional supervised classification, complementary label learning (CLL) operates under a weak-supervision framework in which each sample is annotated by excluding several incorrect labels, known as complementary labels (CLs). Although this reduces the labeling burden, CLL often suffers a decline in performance due to the weakened supervision. To overcome this limitation, this study proposes a multi-view fusion and self-adaptive label discovery based CLL method (MVSLDCLL). The self-adaptive label discovery strategy leverages graph-based semi-supervised learning to capture the label distribution of each training sample as a convex combination of all its potential labels. The multi-view fusion module is designed to accommodate multiple views of feature representations: it minimizes the discrepancy between the label projections of each pair of views, in line with the consensus principle. Additionally, a simple mechanism inspired by a teamwork analogy is proposed to incorporate per-sample view discrepancy. Experimental results demonstrate that MVSLDCLL learns more discriminative label distributions and achieves significantly higher accuracies than state-of-the-art CLL methods. An ablation study further validates the effectiveness of both the self-adaptive label discovery strategy and the multi-view fusion module.
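To illustrate the idea of graph-based self-adaptive label discovery described above, the following is a minimal sketch, not the authors' implementation: it propagates label distributions over a kNN similarity graph while zeroing out each sample's complementary labels, so each row remains a convex combination over the sample's potential labels. The function name, hyperparameters (`k`, `alpha`, `n_iter`), and the Gaussian edge weighting are illustrative assumptions.

```python
import numpy as np

def self_adaptive_label_discovery(X, comp_labels, n_classes, k=5, alpha=0.9, n_iter=50):
    """Sketch of label-distribution discovery under complementary-label constraints.

    X            : (n, d) feature matrix
    comp_labels  : list of sets; comp_labels[i] holds sample i's complementary labels
    Returns      : (n, n_classes) row-stochastic matrix; zero on complementary labels
    """
    n = X.shape[0]
    # Pairwise distances; exclude self-edges
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    # kNN graph with Gaussian similarities (illustrative weighting choice)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[:k]
        W[i, idx] = np.exp(-d[i, idx] ** 2)
    W = (W + W.T) / 2                       # symmetrize
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic

    # Initial distribution: uniform over non-complementary (potential) labels
    F0 = np.ones((n, n_classes))
    for i, cls in enumerate(comp_labels):
        F0[i, list(cls)] = 0.0
    F0 /= F0.sum(axis=1, keepdims=True)

    # Propagate, then project back onto the feasible simplex each step:
    # zero the complementary labels and renormalize, keeping each row a
    # convex combination of the sample's potential labels
    F = F0.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * F0
        for i, cls in enumerate(comp_labels):
            F[i, list(cls)] = 0.0
        F /= F.sum(axis=1, keepdims=True)
    return F
```

The projection step is what keeps the discovered distributions consistent with the weak supervision: probability mass can only be shared among labels that were not excluded for that sample.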