
Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations

Authors

You Yuning, Chen Tianlong, Wang Zhangyang, Shen Yang

Affiliations

Texas A&M University.

University of Texas at Austin.

Publication

Proc Int Conf Web Search Data Min. 2022 Feb;2022:1300-1309. doi: 10.1145/3488560.3498416. Epub 2022 Feb 15.

Abstract

Self-supervision is recently surging at its new frontier of graph learning. It facilitates graph representations beneficial to downstream tasks, but its success can hinge on domain knowledge for handcrafting or on often expensive trial and error. Even its state-of-the-art representative, graph contrastive learning (GraphCL), is not completely free of those needs, as GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations. Our work aims at advancing GraphCL by answering how such a prior can be represented and learned. Accordingly, we have extended the prefabricated discrete prior in the augmentation set to a learnable continuous prior in the parameter space of graph generators, assuming that graph priors, similar to the concept of image manifolds, can be learned by data generation. Furthermore, to form contrastive views without collapsing to trivial solutions due to the prior's learnability, we have leveraged both the information minimization (InfoMin) and information bottleneck (InfoBN) principles to regularize the learned priors. Eventually, contrastive learning, InfoMin, and InfoBN are incorporated organically into one framework of bi-level optimization. Our principled and automated approach has proven competitive against state-of-the-art graph self-supervision methods, including GraphCL, on benchmarks of small graphs, and has shown even better generalizability on large-scale graphs, without resorting to human expertise or downstream validation. Our code is publicly released at https://github.com/Shen-Lab/GraphCL_Automated.
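The bi-level structure described above can be made concrete with a small sketch. The code below is not the authors' implementation (see the linked repository for that); it is a minimal PyTorch illustration under assumed names (`Encoder`, `ViewGenerator`, `nt_xent`), with a toy feature-only perturbation standing in for the paper's graph generators and the InfoBN regularizer omitted for brevity. It shows the two levels: the encoder minimizes a contrastive (NT-Xent) loss between two views, while the learnable view generator is updated adversarially in the InfoMin spirit.

```python
# Minimal sketch of learnable-prior graph contrastive learning as bi-level
# optimization. All module names, dimensions, and hyperparameters here are
# illustrative assumptions, not the paper's actual choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a GNN encoder; a real one would message-pass over edges."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class ViewGenerator(nn.Module):
    """Learnable continuous prior: here, a parameterized feature perturbation
    (the paper instead learns in the parameter space of graph generators)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, in_dim)
    def forward(self, x):
        return x + 0.1 * torch.tanh(self.net(x))  # small learned perturbation

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch; positives sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                   # pairwise cosine similarities
    labels = torch.arange(z1.size(0))            # matched pairs are positives
    return F.cross_entropy(logits, labels)

encoder, generator = Encoder(16, 32), ViewGenerator(16)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)

x = torch.randn(8, 16)  # toy batch of graph-level features

# Lower level: the encoder minimizes the contrastive loss between two views.
loss = nt_xent(encoder(x), encoder(generator(x)))
opt_enc.zero_grad(); loss.backward(); opt_enc.step()

# Upper level (InfoMin-style): the generator maximizes the same loss, pushing
# the learned view to share less redundant information with the anchor view.
# Only the generator's parameters are stepped here.
loss_gen = -nt_xent(encoder(x), encoder(generator(x)))
opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
```

In the full method this adversarial pressure alone could collapse to trivial views, which is why the learned prior is additionally regularized (InfoMin and InfoBN terms); the alternating updates above are what the abstract calls the bi-level optimization framework.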


Similar Articles

1. Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations.
Proc Int Conf Web Search Data Min. 2022 Feb;2022:1300-1309. doi: 10.1145/3488560.3498416. Epub 2022 Feb 15.

2. Multi-relational graph contrastive learning with learnable graph augmentation.
Neural Netw. 2025 Jan;181:106757. doi: 10.1016/j.neunet.2024.106757. Epub 2024 Sep 26.

3. Graph contrastive learning with implicit augmentations.
Neural Netw. 2023 Jun;163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.

4. Self-supervised contrastive graph representation with node and graph augmentation.
Neural Netw. 2023 Oct;167:223-232. doi: 10.1016/j.neunet.2023.08.039. Epub 2023 Aug 24.

5. Prototypical Graph Contrastive Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2747-2758. doi: 10.1109/TNNLS.2022.3191086. Epub 2024 Feb 5.

