
Learning Disentangled Graph Convolutional Networks Locally and Globally.

Authors

Guo Jingwei, Huang Kaizhu, Yi Xinping, Zhang Rui

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3640-3651. doi: 10.1109/TNNLS.2022.3195336. Epub 2024 Feb 29.

Abstract

Graph convolutional networks (GCNs) have emerged as the most successful learning models for graph-structured data. Despite their success, existing GCNs usually ignore the entangled latent factors that typically arise in real-world graphs, which results in nonexplainable node representations. Even worse, while the emphasis has been placed on local graph information, the global knowledge of the entire graph is lost to a certain extent. In this work, to address these issues, we propose a novel framework for GCNs, termed LGD-GCN, that takes advantage of both local and global information to disentangle node representations in the latent space. Specifically, we propose to represent a disentangled latent continuous space with a statistical mixture model, by leveraging the neighborhood routing mechanism locally. From the latent space, various new graphs can then be disentangled and learned to reflect the hidden structures with respect to different factors. On the one hand, a novel regularizer is designed to encourage interfactor diversity for model expressivity in the latent space. On the other hand, factor-specific information is encoded globally via message passing along these new graphs, in order to strengthen intrafactor consistency. Extensive evaluations on synthetic data and five benchmark datasets show that LGD-GCN brings significant performance gains over recent competitive models in both disentangling and node classification. In particular, LGD-GCN outperforms the disentangled state-of-the-art models by 7.4% on average on social network datasets.
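The abstract describes the local disentangling step (neighborhood routing into a mixture-like latent space) and the interfactor diversity regularizer only at a high level. Below is a minimal PyTorch sketch of what such a routing layer and a generic diversity penalty might look like; all names (DisentangledRoutingLayer, interfactor_diversity_penalty), tensor shapes, and the exact update rule are illustrative assumptions rather than the authors' implementation, and the global factor-graph message passing and the specific statistical mixture model from the paper are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torch import nn


class DisentangledRoutingLayer(nn.Module):
    """Illustrative disentangled GCN layer using neighborhood routing.

    Node features are projected into `num_factors` channels; a few routing
    iterations softly assign each neighbor to the factor channel it best
    matches, and each channel aggregates only its assigned neighbors.
    """

    def __init__(self, in_dim, out_dim, num_factors=4, routing_iters=3):
        super().__init__()
        assert out_dim % num_factors == 0, "out_dim must split evenly across factors"
        self.num_factors = num_factors
        self.routing_iters = routing_iters
        self.channel_dim = out_dim // num_factors
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: [N, in_dim]; edge_index: [2, E] as (target, source) pairs.
        n = x.size(0)
        z = self.proj(x).view(n, self.num_factors, self.channel_dim)
        z = F.normalize(z, dim=-1)          # unit-length per-factor features
        tgt, src = edge_index[0], edge_index[1]

        c = z                               # factor centers start from the node itself
        for _ in range(self.routing_iters):
            # Soft assignment: how well each neighbor matches each factor of the target.
            logits = (z[src] * c[tgt]).sum(dim=-1)           # [E, K]
            p = torch.softmax(logits, dim=-1).unsqueeze(-1)  # [E, K, 1]
            # Aggregate routed neighbor features into each factor channel.
            agg = torch.zeros_like(c)
            agg.index_add_(0, tgt, p * z[src])
            c = F.normalize(z + agg, dim=-1)

        return c.reshape(n, -1)             # [N, out_dim] disentangled embedding


def interfactor_diversity_penalty(h, num_factors):
    """Generic diversity regularizer (illustrative): penalize cosine similarity
    between the mean embeddings of different factor channels so they stay distinct."""
    centers = F.normalize(h.reshape(h.size(0), num_factors, -1).mean(dim=0), dim=-1)
    sim = centers @ centers.t()                              # [K, K]
    off_diag = sim - torch.diag(torch.diag(sim))
    return off_diag.abs().mean()


if __name__ == "__main__":
    layer = DisentangledRoutingLayer(in_dim=16, out_dim=32, num_factors=4)
    x = torch.randn(10, 16)                        # 10 nodes, 16 raw features
    edge_index = torch.randint(0, 10, (2, 40))     # 40 random directed edges
    h = layer(x, edge_index)                       # [10, 32]
    reg = interfactor_diversity_penalty(h, num_factors=4)
    print(h.shape, reg.item())
```

In a training loop, the penalty would typically be added to the node-classification loss with a small weight; the routing step here stands in for the locally disentangled aggregation the abstract describes.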

