Contextual Correlation Preserving Multiview Featured Graph Clustering.

Publication Information

IEEE Trans Cybern. 2020 Oct;50(10):4318-4331. doi: 10.1109/TCYB.2019.2926431. Epub 2019 Jul 19.

Abstract

Graph clustering, which aims at discovering sets of related vertices in graph-structured data, plays a crucial role in various applications, such as social community detection and biological module discovery. With the huge increase in the volume of data in recent years, graph clustering is used in an increasing number of real-life scenarios. However, the classical and state-of-the-art methods, which consider only single-view features or a single vector concatenating features from different views and neglect the contextual correlation between pairwise features, are insufficient for the task, as features that characterize vertices in a graph are usually from multiple views and the contextual correlation between pairwise features may influence the cluster preference for vertices. To address this challenging problem, in this paper we introduce a novel graph clustering model, dubbed contextual correlation preserving multiview featured graph clustering (CCPMVFGC), for discovering clusters in graphs with multiview vertex features. Unlike most of the aforementioned approaches, CCPMVFGC is capable of learning a shared latent space from multiview features as the cluster preference for each vertex and making use of this latent space to model the inter-relationship between pairwise vertices. CCPMVFGC uses an effective method to compute the degree of contextual correlation between pairwise vertex features and utilizes view-wise latent space representing the feature-cluster preference to model the computed correlation. Thus, the cluster preference learned by CCPMVFGC is jointly inferred by multiview features, view-wise correlations of pairwise features, and the graph topology. Accordingly, we propose a unified objective function for CCPMVFGC and develop an iterative strategy to solve the formulated optimization problem. We also provide the theoretical analysis of the proposed model, including convergence proof and computational complexity analysis. In our experiments, we extensively compare the proposed CCPMVFGC with both classical and state-of-the-art graph clustering methods on eight standard graph datasets (six multiview and two single-view datasets). The results show that CCPMVFGC achieves competitive performance on all eight datasets, which validates the effectiveness of the proposed model.
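
The abstract describes learning a shared latent "cluster preference" for each vertex from multiview features together with the graph topology. As a rough illustration of that general idea only (not the CCPMVFGC objective itself, which additionally models contextual correlations between pairwise features), the minimal sketch below factorizes each view's feature matrix and the adjacency matrix against one shared nonnegative latent matrix. All names (A, Xs, H, alpha), the squared-error terms, and the multiplicative updates are assumptions introduced here for illustration.

```python
# Illustrative sketch only: a generic multiview featured-graph clustering
# baseline that shares one latent cluster-preference matrix H across views.
# This is NOT the CCPMVFGC model from the paper; it omits the contextual
# correlation terms and uses a simple joint factorization objective instead.
import numpy as np

def multiview_graph_clustering(A, Xs, k, alpha=1.0, n_iters=200, seed=0):
    """A: (n, n) symmetric adjacency matrix.
    Xs: list of (n, d_v) nonnegative feature matrices, one per view.
    k: number of clusters.
    Returns hard cluster labels taken from the shared latent matrix H."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k)) + 1e-3                              # shared cluster preference
    Ws = [rng.random((X.shape[1], k)) + 1e-3 for X in Xs]      # view-wise bases
    eps = 1e-9
    for _ in range(n_iters):
        # Multiplicative updates for
        #   min_{H, W_v >= 0}  sum_v ||X_v - H W_v^T||_F^2 + alpha * ||A - H H^T||_F^2
        for v, X in enumerate(Xs):
            Ws[v] *= (X.T @ H) / (Ws[v] @ (H.T @ H) + eps)
        num = sum(X @ W for X, W in zip(Xs, Ws)) + 2 * alpha * (A @ H)
        den = H @ sum(W.T @ W for W in Ws) + 2 * alpha * (H @ (H.T @ H)) + eps
        H *= num / den
    return H.argmax(axis=1)

if __name__ == "__main__":
    # Tiny toy graph: two 4-node cliques joined by a single edge, with two
    # redundant feature views; the two cliques should form two clusters.
    n = 8
    A = np.zeros((n, n))
    A[:4, :4] = 1; A[4:, 4:] = 1; A[3, 4] = A[4, 3] = 1
    np.fill_diagonal(A, 0)
    X1 = np.vstack([np.tile([1.0, 0.1], (4, 1)), np.tile([0.1, 1.0], (4, 1))])
    X2 = X1[:, ::-1].copy()
    print(multiview_graph_clustering(A, [X1, X2], k=2))
```

In this toy setting the shared matrix H plays the role the abstract assigns to the cluster preference: it is constrained jointly by every feature view and by the adjacency structure, so a vertex's label reflects both its attributes and its neighborhood.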

