
CLEAR: Cluster-Enhanced Contrast for Self-Supervised Graph Representation Learning

Author Information

Luo Xiao, Ju Wei, Qu Meng, Gu Yiyang, Chen Chong, Deng Minghua, Hua Xian-Sheng, Zhang Ming

Publication Information

IEEE Trans Neural Netw Learn Syst. 2022 Jun 8;PP. doi: 10.1109/TNNLS.2022.3177775.

Abstract

This article studies self-supervised graph representation learning, which is critical to various tasks, such as protein property prediction. Existing methods typically aggregate representations of each individual node as graph representations, but fail to comprehensively explore local substructures (i.e., motifs and subgraphs), which also play important roles in many graph mining tasks. In this article, we propose a self-supervised graph representation learning framework named Cluster-Enhanced Contrast (CLEAR) that models the structural semantics of a graph at graph-level and substructure-level granularities, i.e., global semantics and local semantics, respectively. Specifically, we use graph-level augmentation strategies followed by a graph neural network-based encoder to explore global semantics. As for local semantics, we first use graph clustering techniques to partition each whole graph into several subgraphs while preserving as much semantic information as possible. We further employ a self-attention interaction module to aggregate the semantics of all subgraphs into a local-view graph representation. Moreover, we integrate both global semantics and local semantics into a multiview graph contrastive learning framework, enhancing the semantic-discriminative ability of graph representations. Extensive experiments on various real-world benchmarks demonstrate the efficacy of the proposed approach over current graph self-supervised representation learning approaches on both graph classification and transfer learning tasks.
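The abstract does not specify CLEAR's exact training objective. Multiview graph contrastive frameworks of this kind commonly use an NT-Xent (InfoNCE-style) loss that pulls the global-view and local-view representations of the same graph together while pushing apart the views of other graphs in the batch. The following is a minimal NumPy sketch of such a loss, not the paper's implementation; all function and variable names are hypothetical:

```python
import numpy as np

def nt_xent(global_emb, local_emb, temperature=0.5):
    """NT-Xent contrastive loss between two views of a batch of graphs.

    global_emb, local_emb: (batch, dim) graph embeddings from the
    global (augmented whole-graph) and local (aggregated-subgraph) views.
    The two views of the same graph form the positive pair; every other
    cross-view pair in the batch serves as a negative.
    """
    # Cosine similarity via L2 normalization.
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    l = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    sim = g @ l.T / temperature                  # (batch, batch) similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal of the similarity matrix.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z_global = rng.standard_normal((8, 16))
z_local = z_global + 0.1 * rng.standard_normal((8, 16))  # correlated views
loss_aligned = nt_xent(z_global, z_local)
loss_random = nt_xent(z_global, rng.standard_normal((8, 16)))
```

With correlated views the diagonal similarities dominate and the loss is small; with unrelated embeddings it approaches log(batch size), so minimizing it drives the two views of each graph toward agreement.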

