Zhao Peiyao, Pan Yuangang, Li Xin, Chen Xu, Tsang Ivor W, Liao Lejian
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4622-4634. doi: 10.1109/TNNLS.2022.3228556. Epub 2024 Apr 4.
Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by perturbing the graph structure or node attributes. Although they achieve impressive results, these methods are largely blind to a wealth of prior information implicit in the augmentation process: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented view gradually decreases and 2) the discrimination among the nodes within each augmented view gradually increases. In this article, we argue that both kinds of prior information can be incorporated (in different ways) into the CL paradigm under our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to exploit the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is preserved and is less sensitive to perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
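The first prior (heavier perturbation yields a view less similar to the original graph) can be illustrated with a minimal toy sketch. This is not the authors' implementation: the edge-dropping augmentation, the one-step propagation "encoder", and the margin value are all illustrative assumptions, standing in for whatever augmentation, GNN encoder, and loss the paper actually uses. The pairwise hinge at the end encodes the ranking constraint that a lightly perturbed view should stay closer to the anchor than a heavily perturbed one.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, ratio, rng):
    """Augment a graph by removing a random fraction of its edges
    (attribute masking would play the same role)."""
    adj = adj.copy()
    iu, ju = np.triu_indices_from(adj, k=1)
    idx = np.flatnonzero(adj[iu, ju] > 0)
    drop = rng.choice(idx, size=int(ratio * idx.size), replace=False)
    adj[iu[drop], ju[drop]] = 0
    adj[ju[drop], iu[drop]] = 0
    return adj

def embed(adj, feats):
    """One propagation step as a stand-in encoder: row-normalized A @ X."""
    deg = adj.sum(1, keepdims=True) + 1e-8
    return (adj / deg) @ feats

def cos(a, b):
    """Per-node cosine similarity between two embedding matrices."""
    num = (a * b).sum(1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return num / den

# toy graph: a ring of 8 nodes with random features
n = 8
adj = np.zeros((n, n))
for k in range(n):
    adj[k, (k + 1) % n] = adj[(k + 1) % n, k] = 1
feats = rng.normal(size=(n, 4))

light = drop_edges(adj, 0.1, rng)   # mild perturbation
heavy = drop_edges(adj, 0.5, rng)   # strong perturbation

h0 = embed(adj, feats)
sim_light = cos(h0, embed(light, feats)).mean()
sim_heavy = cos(h0, embed(heavy, feats)).mean()

# ranking hinge: the lightly perturbed (more similar) view should be
# ranked closer to the anchor than the heavily perturbed one, by a margin
margin = 0.1
rank_loss = max(0.0, margin - (sim_light - sim_heavy))
```

In this toy setting the mean anchor-view similarity of the mild augmentation exceeds that of the strong one, so a ranking loss over positive views (rather than treating all of them as equally positive, as plain CL does) can exploit the perturbation ordering.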