Towards generalizable Graph Contrastive Learning: An information theory perspective.

Affiliations

Data Intelligence System Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China.

Publication information

Neural Netw. 2024 Apr;172:106125. doi: 10.1016/j.neunet.2024.106125. Epub 2024 Jan 17.

DOI: 10.1016/j.neunet.2024.106125
PMID: 38320348
Abstract

Graph Contrastive Learning (GCL) is increasingly employed in graph representation learning with the primary aim of learning node/graph representations from a predefined pretext task that can generalize to various downstream tasks. Meanwhile, the transition from a specific pretext task to diverse and unpredictable downstream tasks poses a significant challenge for GCL's generalization ability. Most existing GCL approaches maximize mutual information between two views derived from the original graph, either randomly or heuristically. However, the generalization ability of GCL and its theoretical principles are still less studied. In this paper, we introduce a novel metric, GCL-GE, to quantify the generalization gap between predefined pretext and agnostic downstream tasks. Given the inherent intractability of GCL-GE, we leverage concepts from information theory to derive a mutual information upper bound that is independent of the downstream tasks, thus enabling the metric's optimization despite the variability in downstream tasks. Based on this theoretical insight, we propose InfoAdv, a GCL framework to directly enhance generalization by jointly optimizing GCL-GE and InfoMax. Extensive experiments validate the capability of InfoAdv to enhance performance across a wide variety of downstream tasks, demonstrating its effectiveness in improving the generalizability of GCL.
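The abstract describes the common GCL recipe, maximizing mutual information between two augmented views, only in prose. As a concrete anchor, the sketch below shows the standard InfoNCE estimator of that InfoMax term; the encoder, the augmentations, the names (`infonce_loss`, `tau`, `lambda_ge`, `ge_term`), and the joint objective at the end are illustrative assumptions, since neither the GCL-GE bound nor InfoAdv's exact loss appears in the abstract.

```python
# A minimal sketch of the InfoMax objective the abstract attributes to most
# GCL methods: an InfoNCE lower bound on the mutual information between node
# embeddings of two augmented graph views. Names and details here are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def infonce_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE between two views; z1 and z2 are [num_nodes, dim] embeddings
    of the same nodes under two different graph augmentations."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # scaled cosine similarities between all node pairs
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    # Symmetrize over the two views, as is common for contrastive objectives.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Hypothetical joint objective in the spirit of InfoAdv: InfoMax plus a
# weighted generalization-gap penalty (`ge_term` is a stand-in, not the
# paper's actual GCL-GE upper bound):
# loss = infonce_loss(z1, z2) + lambda_ge * ge_term
```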


Similar articles

1. Local structure-aware graph contrastive representation learning. Neural Netw. 2024 Apr;172:106083. doi: 10.1016/j.neunet.2023.12.037. Epub 2023 Dec 27.
2. Understanding and mitigating dimensional collapse of Graph Contrastive Learning: A non-maximum removal approach. Neural Netw. 2025 Jan;181:106652. doi: 10.1016/j.neunet.2024.106652. Epub 2024 Aug 22.
3. TP-GCL: graph contrastive learning from the tensor perspective. Front Neurorobot. 2024 May 21;18:1381084. doi: 10.3389/fnbot.2024.1381084. eCollection 2024.
4. DPGCL: Dual pass filtering based graph contrastive learning. Neural Netw. 2024 Nov;179:106517. doi: 10.1016/j.neunet.2024.106517. Epub 2024 Jul 11.
5. A Good View for Graph Contrastive Learning. Entropy (Basel). 2024 Feb 27;26(3):208. doi: 10.3390/e26030208.
6. Community-CL: An Enhanced Community Detection Algorithm Based on Contrastive Learning. Entropy (Basel). 2023 May 29;25(6):864. doi: 10.3390/e25060864.
7. Hierarchically Contrastive Hard Sample Mining for Graph Self-Supervised Pretraining. IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16748-16761. doi: 10.1109/TNNLS.2023.3297607. Epub 2024 Oct 29.
8. Multi-view graph pooling with coarsened graph disentanglement. Neural Netw. 2024 Jun;174:106221. doi: 10.1016/j.neunet.2024.106221. Epub 2024 Mar 4.
9. Affinity Uncertainty-Based Hard Negative Mining in Graph Contrastive Learning. IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11681-11691. doi: 10.1109/TNNLS.2023.3339770. Epub 2024 Sep 3.