Graph contrastive learning with node-level accurate difference.

Author information

Jiao Pengfei, Yu Kaiyan, Bao Qing, Jiang Ying, Guo Xuan, Zhao Zhidong

Affiliations

School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China.

Data Security Governance Zhejiang Engineering Research Center, Hangzhou Dianzi University, Hangzhou 310018, China.

Publication information

Fundam Res. 2024 Sep 3;5(2):818-829. doi: 10.1016/j.fmre.2024.06.013. eCollection 2025 Mar.

Abstract

Graph contrastive learning (GCL) has attracted extensive research interest due to its powerful ability to capture latent structural and semantic information of graphs in a self-supervised manner. Existing GCL methods commonly adopt predefined graph augmentations to generate two contrastive views. Subsequently, they design a contrastive pretext task between these views with the goal of maximizing their agreement. These methods assume the augmented graph can fully preserve the semantics of the original. However, typical data augmentation strategies in GCL, such as random edge dropping, may alter the properties of the original graph. As a result, previous GCL methods overlooked graph differences, potentially leading to difficulty distinguishing between graphs that are structurally similar but semantically different. Therefore, we argue that it is necessary to design a method that can quantify the dissimilarity between the original and augmented graphs to more accurately capture the relationships between samples. In this work, we propose a novel graph contrastive learning framework, named Accurate Difference-based Node-Level Graph Contrastive Learning (DNGCL), which helps the model distinguish similar graphs with slight differences by learning node-level differences between graphs. Specifically, we train the model to distinguish between original and augmented nodes via a node discriminator and employ cosine dissimilarity to accurately measure the difference between each node. Furthermore, we employ multiple types of data augmentation commonly used in current GCL methods on the original graph, aiming to learn the differences between nodes under different augmentation strategies and help the model learn richer local information. We conduct extensive experiments on six benchmark datasets and the results show that our DNGCL outperforms most state-of-the-art baselines, which strongly validates the effectiveness of our model.
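Two of the mechanics described in the abstract — random edge dropping as a graph augmentation, and cosine dissimilarity as a node-level difference measure — can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the edge-list graph representation, function names, and toy embeddings are assumptions.

```python
import numpy as np

def drop_edges(edges, drop_prob=0.2, rng=None):
    """Randomly drop edges -- a typical GCL augmentation.

    edges: list of (u, v) pairs; each edge is kept with probability 1 - drop_prob.
    """
    rng = rng or np.random.default_rng(0)
    return [e for e in edges if rng.random() >= drop_prob]

def cosine_dissimilarity(z1, z2):
    """Node-level difference: 1 - cosine similarity, computed row-wise.

    z1, z2: (num_nodes, dim) embeddings of corresponding nodes in the
    original and augmented graphs.
    """
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return 1.0 - np.sum(z1n * z2n, axis=1)

# Toy example: node 0 is unchanged by the augmentation, node 1 is altered.
z_orig = np.array([[1.0, 0.0], [0.0, 1.0]])
z_aug  = np.array([[1.0, 0.0], [1.0, 1.0]])
diff = cosine_dissimilarity(z_orig, z_aug)
# diff[0] is ~0 (unchanged node); diff[1] > 0 (altered node)
```

Because the measure is computed per node rather than per graph, an augmentation that perturbs only part of the graph yields a difference signal concentrated on the affected nodes, which is the granularity the abstract argues a discriminator needs in order to separate structurally similar but semantically different graphs.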

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0db5/11997587/28b87183d2ad/ga1.jpg
