
Toward Graph Self-Supervised Learning With Contrastive Adjusted Zooming

Authors

Zheng Yizhen, Jin Ming, Pan Shirui, Li Yuan-Fang, Peng Hao, Li Ming, Li Zhao

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):8882-8896. doi: 10.1109/TNNLS.2022.3216630. Epub 2024 Jul 8.

Abstract

Graph representation learning (GRL) is critical for analyzing graph-structured data. However, most existing graph neural networks (GNNs) rely heavily on labeling information, which is typically expensive to obtain in the real world. Although some existing works aim to learn graph representations effectively in an unsupervised manner, they suffer from limitations such as a heavy reliance on monotone contrastiveness and limited scalability. To overcome these problems, in light of recent advances in graph contrastive learning, we introduce G-Zoom, a novel self-supervised GRL algorithm that learns node representations via a proposed adjusted zooming scheme. Specifically, this mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales: micro (i.e., node level), meso (i.e., neighborhood level), and macro (i.e., subgraph level). First, we generate two augmented views of the input graph via two different graph augmentations. Then, we establish three contrastive objectives at the above three scales progressively, from the node level through the neighborhood level to the subgraph level, maximizing the agreement between graph representations across scales. While the micro and macro perspectives already extract valuable clues from a given graph, the neighborhood-level contrastiveness gives G-Zoom a customizable option, based on our adjusted zooming scheme, to manually choose an optimal viewpoint between the micro and macro perspectives and thus better understand the graph data. In addition, to make our model scalable to large graphs, we use a parallel graph diffusion approach to decouple model training from the graph size. We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model consistently outperforms state-of-the-art methods.
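To make the multi-scale idea concrete, the following is a minimal NumPy sketch of contrasting two augmented views at three scales (node, neighborhood, subgraph). It is not the paper's implementation: G-Zoom uses learned GNN encoders, its specific adjusted zooming scheme, and graph diffusion, whereas here the embeddings, the random adjacency, and the loss weights are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim(a, b):
    # Pairwise cosine similarity between row vectors of a and b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def info_nce(z1, z2, tau=0.5):
    # One-directional InfoNCE: the matching row in the other view is the
    # positive; every other row serves as a negative.
    sim = np.exp(cosine_sim(z1, z2) / tau)
    pos = np.diag(sim)
    return float(-np.log(pos / sim.sum(axis=1)).mean())

# Toy embeddings standing in for encoder outputs on two augmented views.
n, d = 8, 16
z_view1 = rng.normal(size=(n, d))
z_view2 = z_view1 + 0.1 * rng.normal(size=(n, d))  # correlated second view

# Micro scale: node-level contrast across the two views.
loss_node = info_nce(z_view1, z_view2)

# Meso scale: contrast each node against the mean embedding of its
# neighborhood in the other view (a random adjacency as a stand-in).
adj = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(adj, 1.0)  # include self-loops so no row is empty
neigh2 = adj @ z_view2 / adj.sum(axis=1, keepdims=True)
loss_neigh = info_nce(z_view1, neigh2)

# Macro scale: score each node against a pooled subgraph summary of the
# other view (a DGI-style discriminator, simplified to a dot product).
summary2 = z_view2.mean(axis=0, keepdims=True)
scores = z_view1 @ summary2.T
loss_sub = float(-np.log(1.0 / (1.0 + np.exp(-scores))).mean())

# Weighted multi-scale objective (weights are illustrative, not tuned).
total = loss_node + 0.5 * loss_neigh + 0.5 * loss_sub
```

In a full model, gradients of such a combined objective would be backpropagated through shared encoders so that agreement is maximized across all three scales simultaneously.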

