Chen Zhikui, Li Lifang, Zhang Xu, Wang Han
DUT School of Software Technology and DUT-RU International School of Information Science and Engineering, Dalian University of Technology, 321 TuQiang Street, Development Zone, Dalian 116620, Liaoning, China.
Neural Netw. 2025 Mar;183:106927. doi: 10.1016/j.neunet.2024.106927. Epub 2024 Nov 22.
Deep graph clustering is a fundamental yet challenging task for graph data analysis. Recent efforts have achieved significant success in combining autoencoders and graph convolutional networks to explore graph-structured data. However, we observe that these approaches tend to map different nodes into the same representation, resulting in less discriminative node feature representations and limited clustering performance. Although some contrastive graph clustering methods alleviate this problem, they depend heavily on carefully selected data augmentations, which greatly limits the capability of contrastive learning. Moreover, they fail to consider the self-consistency between node representations and cluster assignments, which degrades clustering performance. To address these issues, we propose a novel contrastive deep graph clustering method termed Aligning Representation Learning Network (ARLN). Specifically, we exploit contrastive learning between an autoencoder and a graph autoencoder to avoid complex data augmentations. Moreover, we introduce an instance contrastive module and a feature contrastive module for consensus representation learning; these modules learn discriminative node representations via contrastive learning. In addition, we design a novel assignment probability contrastive module to maintain the self-consistency between node representations and cluster assignments. Extensive experimental results on three benchmark datasets demonstrate the superiority of the proposed ARLN over existing state-of-the-art deep graph clustering methods.
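The abstract describes three contrastive components: an instance-level contrast between autoencoder (AE) and graph autoencoder (GAE) embeddings, a feature-level contrast, and an assignment-probability contrast that keeps node representations and cluster assignments self-consistent. The sketch below is a minimal, hypothetical PyTorch illustration of the first and last of these ideas, assuming a standard cross-view InfoNCE formulation; the function names, temperature values, and exact loss forms are assumptions for illustration, not ARLN's published objective.

```python
# Hypothetical sketch of augmentation-free, two-view contrastive alignment
# between AE and GAE node embeddings, plus an assignment-probability
# consistency term. This is NOT the exact ARLN loss; it only illustrates
# the general technique named in the abstract.
import torch
import torch.nn.functional as F


def instance_contrastive(z_ae, z_gae, temperature=0.5):
    """InfoNCE-style loss: the same node in the two views is the positive
    pair; all other nodes act as negatives."""
    z1 = F.normalize(z_ae, dim=1)
    z2 = F.normalize(z_gae, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Symmetrize over both directions (AE -> GAE and GAE -> AE).
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))


def assignment_contrastive(q_ae, q_gae, temperature=1.0):
    """Align soft cluster-assignment distributions across the two views,
    treating each cluster's assignment profile as the contrasted unit."""
    p1 = F.normalize(q_ae.t(), dim=1)             # (K, N): one row per cluster
    p2 = F.normalize(q_gae.t(), dim=1)
    logits = p1 @ p2.t() / temperature            # (K, K) cluster similarities
    labels = torch.arange(p1.size(0), device=p1.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))


if __name__ == "__main__":
    N, d, K = 128, 32, 7                          # nodes, embedding dim, clusters
    z_ae, z_gae = torch.randn(N, d), torch.randn(N, d)
    q_ae = F.softmax(torch.randn(N, K), dim=1)    # soft cluster assignments (AE view)
    q_gae = F.softmax(torch.randn(N, K), dim=1)   # soft cluster assignments (GAE view)
    loss = instance_contrastive(z_ae, z_gae) + assignment_contrastive(q_ae, q_gae)
    print(loss.item())
```

In this sketch the two "views" come from two different encoders over the same nodes rather than from data augmentations, which is the augmentation-free setting the abstract emphasizes; how the embeddings and assignment probabilities are actually produced and weighted is left to the paper.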