Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Australia; ByteDance AI Lab, Shanghai, China.
ByteDance AI Lab, Shanghai, China.
Neural Netw. 2023 Jun;163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which uses augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we propose an upper bound on the expected contrastive loss, which improves the efficiency of our learning algorithm. Graph semantics are thus preserved within the augmentations without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art accuracy on downstream classification compared with other graph contrastive baselines, and ablation studies demonstrate the effectiveness of each module in iGCL.
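The key computational idea, replacing explicit sampling of latent augmentations with a closed-form bound on the expected contrastive loss, can be sketched as follows. Assuming each node's latent augmentation follows an isotropic Gaussian VGAE posterior N(mu_k, sigma_k^2 I), applying Jensen's inequality to the log-sum-exp term of an InfoNCE-style loss, together with the Gaussian moment-generating function E[exp(a·z)] = exp(a·mu + 0.5 sigma^2 ||a||^2), yields a sampling-free upper bound. This is only an illustrative construction (the function name, shapes, and temperature are our assumptions, not the paper's exact derivation):

```python
import numpy as np

def expected_infonce_upper_bound(mu, logvar, i, j, tau=0.5):
    """Upper bound on the expected InfoNCE loss for anchor i and
    positive j, when each candidate latent z_k ~ N(mu_k, sigma_k^2 I)
    (an isotropic VGAE-style posterior with scalar variance per node).

    The positive-pair term uses linearity of expectation; the
    log-sum-exp term is bounded by Jensen's inequality plus the
    Gaussian moment-generating function, so no sampling is needed.
    """
    a = mu[i] / tau                        # anchor embedding, treated as fixed
    sigma2 = np.exp(logvar)                # per-node variances, shape (N,)
    pos = a @ mu[j]                        # E[a . z_j] = a . mu_j
    # log E[exp(a . z_k)] = a . mu_k + 0.5 * sigma_k^2 * ||a||^2
    logits = mu @ a + 0.5 * sigma2 * (a @ a)
    return -pos + np.logaddexp.reduce(logits)   # log-sum-exp over candidates

# Toy usage with hypothetical VGAE outputs (random values for illustration)
rng = np.random.default_rng(0)
mu = rng.normal(size=(8, 4))               # posterior means, one row per node
logvar = np.full(8, -2.0)                  # isotropic log-variances per node
bound = expected_infonce_upper_bound(mu, logvar, i=0, j=1)
```

As the variances shrink to zero the bound collapses to the plain InfoNCE loss on the posterior means, and larger posterior variances only increase the bound, which matches the intuition that noisier latent augmentations make the contrastive objective harder.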