Zhuo Wei, Tan Guang
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14301-14314. doi: 10.1109/TNNLS.2023.3278183. Epub 2024 Oct 7.
Graph neural networks (GNNs) have been successful in a variety of graph-based applications. Recently, it has been shown that capturing long-range relationships between nodes helps improve the performance of GNNs. This phenomenon has mostly been confirmed in supervised learning settings. In this article, inspired by contrastive learning (CL), we propose an unsupervised learning pipeline in which different types of long-range similarity information are injected into the GNN model in an efficient way. We reconstruct the original graph in feature and topology spaces to generate three augmented views. During training, our model alternately picks an augmented view and maximizes the agreement between the representations of the view and the original graph. Importantly, we identify the issue of diminishing utility of the augmented views as the model gradually learns useful information from them. Hence, we propose a view update scheme that adaptively adjusts the augmented views, so that the views continue to provide new information that helps with CL. The updated augmented views and the original graph are jointly used to train a shared GNN encoder by optimizing an efficient channel-level contrastive objective. We conduct extensive experiments on six assortative graphs and three disassortative graphs, which demonstrate the effectiveness of our method.
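The abstract refers to a "channel-level contrastive objective" that aligns representations of an augmented view with those of the original graph. The paper's exact loss is not given here; the sketch below illustrates one common channel-level (feature-dimension) formulation in the style of Barlow Twins, where matching channels of the two views are pushed to correlate and distinct channels are decorrelated. The function name, the weighting parameter `lam`, and the NumPy framing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_contrastive_loss(z1, z2, lam=5e-3):
    """Channel-level contrastive objective (illustrative sketch).

    z1, z2: (N, D) node representations of the original graph and an
    augmented view, e.g. produced by a shared GNN encoder.
    lam: weight on the decorrelation (off-diagonal) term.
    """
    # Standardize each channel (feature dimension) across the N nodes.
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-8)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-8)
    n = z1.shape[0]

    # D x D cross-correlation matrix between channels of the two views.
    c = z1.T @ z2 / n

    # Pull matching channels together (diagonal toward 1) and
    # decorrelate non-matching channels (off-diagonal toward 0).
    diag = np.diag(c)
    on_diag = ((diag - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (diag ** 2).sum()
    return on_diag + lam * off_diag
```

A channel-level objective of this kind scales with the number of feature dimensions rather than the number of node pairs, which is one reason such losses are considered efficient for graph CL.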