

Graph contrastive learning with implicit augmentations.

Affiliations

Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Australia; ByteDance AI Lab, Shanghai, China.

ByteDance AI Lab, Shanghai, China.

Publication Information

Neural Netw. 2023 Jun;163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.

DOI: 10.1016/j.neunet.2023.04.001
PMID: 37054514
Abstract

Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbing ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in the latent space learned from a Variational Graph Auto-Encoder by reconstructing graph topological structure. Importantly, instead of explicitly sampling augmentations from latent distributions, we further propose an upper bound for the expected contrastive loss to improve the efficiency of our learning algorithm. Thus, graph semantics can be preserved within the augmentations in an intelligent way without arbitrary manual design or prior human knowledge. Experimental results at both the graph level and node level show that the proposed method achieves state-of-the-art accuracy on downstream classification tasks compared to other graph contrastive baselines, and ablation studies demonstrate the effectiveness of the modules in iGCL.
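The central trick the abstract describes — replacing explicit sampling of latent augmentations with a closed-form upper bound on the expected contrastive loss — can be illustrated with a toy calculation. The sketch below is not the paper's derivation; it shows the general idea using Jensen's inequality and the Gaussian moment-generating function, with `mu` and `sigma` standing in for a (hypothetical) VGAE encoder's output and `W` for a set of contrastive anchors.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 5
mu = rng.normal(size=d)         # hypothetical latent mean from a VGAE encoder
sigma = 0.3 * np.ones(d)        # hypothetical latent std (diagonal Gaussian)
W = rng.normal(size=(K, d))     # hypothetical contrastive anchor vectors

# Monte-Carlo estimate of E_z[logsumexp_j(z . w_j)] with z ~ N(mu, diag(sigma^2)):
# this is what explicit sampling of latent augmentations would have to approximate.
z = mu + sigma * rng.normal(size=(100_000, d))
mc = np.mean(np.logaddexp.reduce(z @ W.T, axis=1))

# Closed-form upper bound, no sampling needed. By Jensen (log is concave),
# E[logsumexp_j s_j] <= log sum_j E[exp(s_j)], and since s_j = z . w_j is
# Gaussian, its MGF gives E[exp(s_j)] = exp(w_j . mu + 0.5 * ||sigma * w_j||^2).
bound = np.logaddexp.reduce(W @ mu + 0.5 * np.sum((sigma * W) ** 2, axis=1))

assert mc <= bound  # the bound dominates the sampled expectation
```

The appeal of such a bound is that it turns an expectation over random augmentations into a deterministic quantity of the latent distribution's parameters, so each training step needs only one forward pass instead of many sampled views.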


Similar Articles

1. Graph contrastive learning with implicit augmentations.
   Neural Netw. 2023 Jun;163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.
2. MoCL: Data-driven Molecular Fingerprint via Knowledge-aware Contrastive Learning from Molecular Graph.
   KDD. 2021 Aug;2021:3585-3594. doi: 10.1145/3447548.3467186. Epub 2021 Aug 14.
3. Community-CL: An Enhanced Community Detection Algorithm Based on Contrastive Learning.
   Entropy (Basel). 2023 May 29;25(6):864. doi: 10.3390/e25060864.
4. Unsupervised graph-level representation learning with hierarchical contrasts.
   Neural Netw. 2023 Jan;158:359-368. doi: 10.1016/j.neunet.2022.11.019. Epub 2022 Nov 26.
5. Local structure-aware graph contrastive representation learning.
   Neural Netw. 2024 Apr;172:106083. doi: 10.1016/j.neunet.2023.12.037. Epub 2023 Dec 27.
6. Attention-wise masked graph contrastive learning for predicting molecular property.
   Brief Bioinform. 2022 Sep 20;23(5). doi: 10.1093/bib/bbac303.
7. Self-supervised contrastive graph representation with node and graph augmentation.
   Neural Netw. 2023 Oct;167:223-232. doi: 10.1016/j.neunet.2023.08.039. Epub 2023 Aug 24.
8. Graph Clustering with High-Order Contrastive Learning.
   Entropy (Basel). 2023 Oct 10;25(10):1432. doi: 10.3390/e25101432.
9. Accurate graph classification via two-staged contrastive curriculum learning.
   PLoS One. 2024 Jan 3;19(1):e0296171. doi: 10.1371/journal.pone.0296171. eCollection 2024.
10. KAMPNet: multi-source medical knowledge augmented medication prediction network with multi-level graph contrastive learning.
    BMC Med Inform Decis Mak. 2023 Oct 30;23(1):243. doi: 10.1186/s12911-023-02325-x.

Cited By

1. Exploring the Latent Information in Spatial Transcriptomics Data via Multi-View Graph Convolutional Network Based on Implicit Contrastive Learning.
   Adv Sci (Weinh). 2025 Jun;12(21):e2413545. doi: 10.1002/advs.202413545. Epub 2025 Apr 30.
2. Single-step retrosynthesis prediction via multitask graph representation learning.
   Nat Commun. 2025 Jan 18;16(1):814. doi: 10.1038/s41467-025-56062-y.
3. Fine-grained Patient Similarity Measuring using Contrastive Graph Similarity Networks.
   Proc (IEEE Int Conf Healthc Inform). 2024 Jun;2024:1-10. doi: 10.1109/ichi61247.2024.00009. Epub 2024 Aug 22.