


Contrastive Graph Representation Learning with Adversarial Cross-View Reconstruction and Information Bottleneck.

Authors

Shou Yuntao, Lan Haozhi, Cao Xiangyong

Affiliations

School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Ministry of Education Key Laboratory for Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China.

Publication

Neural Netw. 2025 Apr;184:107094. doi: 10.1016/j.neunet.2024.107094. Epub 2025 Jan 9.

DOI: 10.1016/j.neunet.2024.107094
PMID: 39799719
Abstract

Graph Neural Networks (GNNs) have received extensive research attention due to their powerful information aggregation capabilities. Despite this success, most GNNs suffer from a popularity bias caused by a small number of popular categories in a graph. In addition, real graph datasets often contain incorrect node labels, which hinders GNNs from learning effective node representations. Graph contrastive learning (GCL) has been shown to be effective at addressing both problems for node classification tasks. Most existing GCL methods create multiple contrasting views by randomly removing edges and nodes, and then maximize the mutual information (MI) between these views to improve the node feature representation. However, maximizing the MI between multiple contrasting views may lead the model to learn redundant information that is irrelevant to the node classification task. To tackle this issue, we propose Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck (CGRL) for node classification, which adaptively learns to mask nodes and edges in the graph to obtain an optimal graph structure representation. Furthermore, we introduce information bottleneck theory into GCL to remove redundant information across the contrasting views while retaining as much information as possible about node classification. Moreover, we add noise perturbations to the original views and reconstruct the augmented views by constructing adversarial views, improving the robustness of the node feature representation. We also verify through theoretical analysis the effectiveness of this cross-view reconstruction mechanism and of information bottleneck theory in capturing graph structure information and improving model generalization. Extensive experiments on real-world public datasets demonstrate that our method significantly outperforms existing state-of-the-art algorithms.
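To make the generic pipeline the abstract builds on concrete, here is a minimal NumPy sketch of standard graph contrastive learning: two views are created by randomly dropping edges, node embeddings come from a single mean-aggregation layer, and an InfoNCE loss estimates the mutual information between matching nodes across views. This is an illustration of the baseline GCL setup the paper improves on, not CGRL itself; the adaptive masking, adversarial reconstruction, and information-bottleneck terms are not reproduced, and helper names such as `drop_edges` are hypothetical, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj: np.ndarray, p: float) -> np.ndarray:
    """Randomly remove a fraction p of edges (undirected graph)."""
    keep = rng.random(adj.shape) > p
    keep = np.triu(keep, 1)
    keep = keep | keep.T          # keep the adjacency symmetric
    return adj * keep

def propagate(adj: np.ndarray, x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One mean-aggregation layer with self-loops, then L2 normalization."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1, keepdims=True)
    h = (a_hat / deg) @ x @ w
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """InfoNCE: node i in view 1 should match node i in view 2."""
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

# Toy graph: 6 nodes on a ring, random 4-d features.
n, d = 6, 4
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
x = rng.standard_normal((n, d))
w = rng.standard_normal((d, d))

# Two stochastic views of the same graph, contrasted node-by-node.
z1 = propagate(drop_edges(adj, 0.2), x, w)
z2 = propagate(drop_edges(adj, 0.2), x, w)
loss = info_nce(z1, z2)
print(round(loss, 4))
```

Minimizing this loss pulls each node's two views together while pushing apart other nodes' embeddings, which is the MI-maximization step whose redundant-information side effect CGRL's information-bottleneck term is designed to counteract.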


Similar Articles

1
A Topology-Enhanced Multi-Viewed Contrastive Approach for Molecular Graph Representation Learning and Classification.
Mol Inform. 2025 Jan;44(1):e202400252. doi: 10.1002/minf.202400252.
2
Local structure-aware graph contrastive representation learning.
Neural Netw. 2024 Apr;172:106083. doi: 10.1016/j.neunet.2023.12.037. Epub 2023 Dec 27.
3
GTC: GNN-Transformer co-contrastive learning for self-supervised heterogeneous graph representation.
Neural Netw. 2025 Jan;181:106645. doi: 10.1016/j.neunet.2024.106645. Epub 2024 Aug 16.
4
GMNI: Achieve good data augmentation in unsupervised graph contrastive learning.
Neural Netw. 2025 Jan;181:106804. doi: 10.1016/j.neunet.2024.106804. Epub 2024 Oct 18.
5
Multitype view of knowledge contrastive learning for recommendation.
Neural Netw. 2025 Jan;181:106690. doi: 10.1016/j.neunet.2024.106690. Epub 2024 Sep 12.
6
ERMAV: Efficient and Robust Graph Contrastive Learning via Multiadversarial Views Training.
IEEE Trans Cybern. 2025 May;55(5):2188-2201. doi: 10.1109/TCYB.2025.3548175. Epub 2025 Apr 23.
7
Graph contrastive learning with node-level accurate difference.
Fundam Res. 2024 Sep 3;5(2):818-829. doi: 10.1016/j.fmre.2024.06.013. eCollection 2025 Mar.
8
Understanding and mitigating dimensional collapse of Graph Contrastive Learning: A non-maximum removal approach.
Neural Netw. 2025 Jan;181:106652. doi: 10.1016/j.neunet.2024.106652. Epub 2024 Aug 22.
9
Contrastive message passing for robust graph neural networks with sparse labels.
Neural Netw. 2025 Feb;182:106912. doi: 10.1016/j.neunet.2024.106912. Epub 2024 Nov 19.