

Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations.

Authors

Li Haifeng, Cao Jun, Zhu Jiawei, Luo Qinyao, He Silu, Wang Xuying

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11157-11167. doi: 10.1109/TNNLS.2023.3248871. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3248871
PMID: 37028033
Abstract

Graph contrastive learning (GCL) is a promising direction for alleviating the label dependence, poor generalization, and weak robustness of graph neural networks, learning representations with invariance and discriminability by solving pretasks. The pretasks are mainly built on mutual information estimation, which requires data augmentation to construct positive samples with similar semantics (to learn invariant signals) and negative samples with dissimilar semantics (to empower representation discriminability). However, an appropriate data augmentation configuration depends heavily on extensive empirical trials, such as choosing the composition of data augmentation techniques and the corresponding hyperparameter settings. We propose an augmentation-free GCL method, invariant-discriminative GCL (iGCL), that does not intrinsically require negative samples. iGCL designs the invariant-discriminative loss (ID loss) to learn invariant and discriminative representations. On the one hand, ID loss learns invariant signals by directly minimizing the mean square error (MSE) between target samples and positive samples in the representation space. On the other hand, ID loss ensures that the representations are discriminative via an orthonormal constraint that forces the different dimensions of the representations to be independent of each other, preventing representations from collapsing to a point or subspace. Our theoretical analysis explains the effectiveness of ID loss from the perspectives of the redundancy-reduction criterion, canonical correlation analysis (CCA), and the information bottleneck (IB) principle. The experimental results demonstrate that iGCL outperforms all baselines on five node-classification benchmark datasets. iGCL also shows superior performance across different label ratios and is capable of resisting graph attacks, which indicates that iGCL has excellent generalization and robustness.
The source code is available at https://github.com/lehaifeng/T-GCN/tree/master/iGCL.
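The two-term ID loss described in the abstract can be sketched in NumPy as follows. This is a minimal illustration based only on the abstract's description (an MSE invariance term plus an orthonormality penalty on the representation dimensions); the weighting factor `lam`, the standardization step, and the exact normalization are assumptions, not the paper's precise formulation, which is given in the source repository linked above.

```python
import numpy as np

def id_loss(z_target, z_pos, lam=1.0):
    """Sketch of an invariant-discriminative (ID) loss.

    z_target, z_pos: (n, d) arrays of representations for the target
    samples and their positive samples. `lam` is a hypothetical
    trade-off weight between the two terms.
    """
    n, d = z_target.shape
    # Invariance term: directly minimize the MSE between target and
    # positive representations, pulling semantically similar samples
    # together without any negative samples.
    invariance = np.mean((z_target - z_pos) ** 2)
    # Discriminability term: push the d x d covariance of the centered,
    # standardized representations toward the identity matrix, so that
    # dimensions stay decorrelated and the representations cannot
    # collapse to a single point or low-dimensional subspace.
    z = z_target - z_target.mean(axis=0)
    z = z / (z.std(axis=0) + 1e-8)
    cov = (z.T @ z) / n
    orthonormality = np.sum((cov - np.eye(d)) ** 2)
    return invariance + lam * orthonormality
```

With identical target and positive representations the invariance term vanishes and only the orthonormality penalty remains, which is why no explicit negative samples are needed: collapse is ruled out by the constraint rather than by contrast with negatives.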


Similar Articles

1. Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11157-11167. doi: 10.1109/TNNLS.2023.3248871. Epub 2024 Aug 5.
2. Graph contrastive learning with implicit augmentations.
   Neural Netw. 2023 Jun;163:156-164. doi: 10.1016/j.neunet.2023.04.001. Epub 2023 Apr 5.
3. Contrastive Graph Representation Learning with Adversarial Cross-View Reconstruction and Information Bottleneck.
   Neural Netw. 2025 Apr;184:107094. doi: 10.1016/j.neunet.2024.107094. Epub 2025 Jan 9.
4. Unsupervised graph-level representation learning with hierarchical contrasts.
   Neural Netw. 2023 Jan;158:359-368. doi: 10.1016/j.neunet.2022.11.019. Epub 2022 Nov 26.
5. Generative and contrastive graph representation learning with message passing.
   Neural Netw. 2025 May;185:107224. doi: 10.1016/j.neunet.2025.107224. Epub 2025 Feb 6.
6. Contrastive graph auto-encoder for graph embedding.
   Neural Netw. 2025 Jul;187:107367. doi: 10.1016/j.neunet.2025.107367. Epub 2025 Mar 13.
7. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
8. Self-supervised contrastive graph representation with node and graph augmentation.
   Neural Netw. 2023 Oct;167:223-232. doi: 10.1016/j.neunet.2023.08.039. Epub 2023 Aug 24.
9. Graph contrastive learning with node-level accurate difference.
   Fundam Res. 2024 Sep 3;5(2):818-829. doi: 10.1016/j.fmre.2024.06.013. eCollection 2025 Mar.
10. Affinity Uncertainty-Based Hard Negative Mining in Graph Contrastive Learning.
    IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11681-11691. doi: 10.1109/TNNLS.2023.3339770. Epub 2024 Sep 3.

Cited By

1. Less is more: improving cell-type identification with augmentation-free single-cell RNA-Seq contrastive learning.
   Bioinformatics. 2025 Sep 1;41(9). doi: 10.1093/bioinformatics/btaf437.
2. A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.
   J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.
3. Multimodal Contrastive Learning for Remote Sensing Image Feature Extraction Based on Relaxed Positive Samples.
   Sensors (Basel). 2024 Dec 3;24(23):7719. doi: 10.3390/s24237719.
4. DNASimCLR: a contrastive learning-based deep learning approach for gene sequence data classification.
   BMC Bioinformatics. 2024 Oct 14;25(1):328. doi: 10.1186/s12859-024-05955-8.