Huang Huajuan, Mei Yanbin, Wei Xiuxi, Zhou Yongquan
College of Artificial Intelligence, Guangxi Minzu University, Nanning, 530006, China.
School of Computer Science & Technology, China University of Mining and Technology, Xuzhou, 221116, China.
Sci Rep. 2025 May 9;15(1):16154. doi: 10.1038/s41598-025-00895-6.
Multi-view Clustering (MVC) has gained significant attention in recent years due to its ability to explore consensus information from multiple perspectives. However, traditional MVC methods face two major challenges: (1) how to alleviate the representation degeneration caused by the process of achieving multi-view consensus information, and (2) how to learn discriminative representations with clustering-friendly structures. Most existing MVC methods overlook the importance of inter-cluster separability. To address these issues, we propose a novel Contrastive Learning-based Dual Contrast Mechanism Deep Multi-view Clustering Network. Specifically, we first introduce view-specific autoencoders to extract latent features for each individual view. We then obtain consensus information across views through global feature fusion, measuring pairwise representation discrepancies by maximizing the consistency between the view-specific representations and the global feature representation. Subsequently, we design an adaptive weighting mechanism that automatically enhances useful views in feature fusion while suppressing unreliable views, effectively mitigating the representation degeneration issue. Furthermore, within the contrastive learning framework, we introduce a Dynamic Cluster Diffusion (DC) module that maximizes the distance between different clusters, thereby enhancing cluster separability and yielding a clustering-friendly discriminative representation. Extensive experiments on multiple datasets demonstrate that our method not only achieves state-of-the-art clustering performance but also produces clustering structures with better separability.
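The two mechanisms the abstract describes can be illustrated with a minimal sketch. The snippet below is NOT the authors' implementation; it is a hypothetical, dependency-free illustration in which view weights are derived from each view's cosine agreement with the unweighted mean (so views consistent with the consensus are amplified and unreliable views are suppressed), and cluster separability is scored as the mean pairwise distance between cluster centroids, the quantity a diffusion-style objective would push upward. The weighting rule and function names are assumptions for exposition only.

```python
import math

def _softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fuse_views(view_feats):
    """Adaptively fuse per-view feature vectors into one global representation.

    Each view's weight is its softmax-normalized cosine agreement with the
    unweighted mean of all views, so a view that conflicts with the emerging
    consensus receives a small weight (this stands in for the paper's learned
    adaptive weighting; the exact rule here is illustrative).
    """
    n_views, dim = len(view_feats), len(view_feats[0])
    mean = [sum(v[d] for v in view_feats) / n_views for d in range(dim)]
    weights = _softmax([_cosine(v, mean) for v in view_feats])
    fused = [sum(w * v[d] for w, v in zip(weights, view_feats))
             for d in range(dim)]
    return fused, weights

def cluster_separation(centroids):
    """Mean pairwise Euclidean distance between cluster centroids.

    A cluster-diffusion-style objective would maximize this quantity to push
    clusters apart and improve inter-cluster separability.
    """
    dists = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(centroids[i], centroids[j])))
            dists.append(d)
    return sum(dists) / len(dists)

# Two views agree; the third is an outlier and gets the smallest weight.
views = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.5]]
fused, weights = fuse_views(views)
```

Running the example, `weights[2]` comes out smallest of the three, so the outlier view contributes least to `fused`; `cluster_separation` is then a scalar one could maximize (e.g. via a gradient on the centroids) to spread clusters apart.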