

Selective Contrastive Learning for Unpaired Multi-View Clustering.

Author Information

Xin Like, Yang Wanqi, Wang Lei, Yang Ming

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1749-1763. doi: 10.1109/TNNLS.2023.3329658. Epub 2025 Jan 7.

Abstract

In this article, we investigate a novel but insufficiently studied problem, unpaired multi-view clustering (UMC), where no paired observed samples exist in the multi-view data and the goal is to leverage the unpaired observed samples in all views for effective joint clustering. Existing methods in incomplete multi-view clustering usually exploit the sample pairing relationship between views to connect the views for joint clustering, but this strategy is invalid in the UMC setting. Therefore, we strive to mine a consistent cluster structure across views and propose an effective method, selective contrastive learning for UMC (scl-UMC), which must address two challenging issues: 1) the clustering structure is uncertain in the absence of supervision and 2) the pairing relationship between the clusters of different views is uncertain. For the first issue, we design an inner-view (IV) selective contrastive learning module that enhances the clustering structure and alleviates the uncertainty by selecting confident samples near the cluster centroids to perform contrastive learning in each view. For the second issue, we design a cross-view (CV) selective contrastive learning module that first iteratively matches the clusters between views and then tightens the matched clusters. We also utilize mutual information to further enhance the correlation of the matched clusters between views. Extensive experiments demonstrate the effectiveness of our method for UMC compared with state-of-the-art methods.
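The abstract does not give implementation details, so the following is only a minimal, illustrative sketch of the two ideas it describes: selecting confident samples near cluster centroids for contrastive learning within a view, and matching clusters across views before tightening them. All names (select_confident, selective_contrastive_loss, match_clusters, keep_ratio, temperature), the use of k-means-style pseudo-labels, an InfoNCE-style loss, and Hungarian matching of centroids are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def select_confident(z, centroids, labels, keep_ratio=0.5):
    """Keep the samples in one view that lie closest to their assigned centroid."""
    dist = torch.norm(z - centroids[labels], dim=1)   # distance to own centroid
    k = max(1, int(keep_ratio * z.size(0)))
    return torch.argsort(dist)[:k]                    # indices of confident samples


def selective_contrastive_loss(z, labels, temperature=0.5):
    """InfoNCE-style loss over pseudo-labels: same-cluster pairs act as positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                        # a sample is not its own positive
    sim = sim - 1e9 * torch.eye(z.size(0))            # exclude self-similarity from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos_mask * log_prob).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()


def match_clusters(centroids_a, centroids_b):
    """Match clusters of two views by minimum-cost assignment over centroid distances."""
    cost = torch.cdist(centroids_a, centroids_b).numpy()
    _, col = linear_sum_assignment(cost)              # cluster i in view A <-> cluster col[i] in view B
    return col


# Toy usage on random embeddings of a single view with 4 pseudo-clusters.
z = torch.randn(128, 16)
labels = torch.arange(128) % 4
centroids = torch.stack([z[labels == c].mean(0) for c in range(4)])
idx = select_confident(z, centroids, labels)
print(selective_contrastive_loss(z[idx], labels[idx]).item())
```

In this reading, only the confident subset contributes to the within-view loss, and match_clusters would supply the cluster correspondence that the cross-view module then tightens; the paper's actual loss functions and matching procedure may differ.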

