Graph explicit pooling for graph-level representation learning

Authors

Liu Chuang, Yu Wenhang, Gao Kuang, Ma Xueqi, Zhan Yibing, Wu Jia, Hu Wenbin, Du Bo

Affiliations

School of Computer Science, Wuhan University, Wuhan, China.

Changjiang Schinta Software Technology Co., Ltd., Wuhan, China; Internet+ Intelligent Water Conservancy Key Laboratory of Changjiang Water Resources Commission, Wuhan, China.

Publication

Neural Netw. 2025 Jan;181:106790. doi: 10.1016/j.neunet.2024.106790. Epub 2024 Oct 11.

Abstract

Graph pooling has been increasingly recognized as crucial for Graph Neural Networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages: selecting top-ranked nodes and discarding the remaining nodes to construct coarsened graph representations. However, this paper highlights two key issues with these methods: (1) the process of selecting nodes to discard frequently relies on additional Graph Convolutional Networks or Multilayer Perceptrons, without thoroughly evaluating each node's impact on the final graph representation and the subsequent prediction task; (2) current graph pooling methods tend to directly discard the noisy (dropped) segment of the graph without accounting for the latent information it contains. To address the first issue, we introduce a novel Graph explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and the final representation vectors crucial for classification. The second issue is addressed with an extended version of GrePool (i.e., GrePool+), which applies a uniform loss to the discarded nodes; this addition augments the training process and improves classification accuracy. Furthermore, we conduct comprehensive experiments on 12 widely used datasets, including the Open Graph Benchmark datasets, to validate the proposed method's effectiveness. The results consistently demonstrate that GrePool outperforms 14 baseline methods on most datasets, and that GrePool+ enhances GrePool's performance without incurring additional computational cost. The code is available at https://github.com/LiuChuang0059/GrePool.
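The abstract describes the two mechanisms only at a high level. Below is a minimal PyTorch sketch of how they could be instantiated: grepool scores each node by its affinity to a global read-out vector and keeps the top-ranked fraction, while uniform_loss regularizes the dropped nodes' class predictions toward the uniform distribution. The function names, the mean read-out (the paper states that scoring uses the final representation vectors crucial for classification, which a simple feature mean only approximates), the sigmoid gating, and the KL-to-uniform formulation are all illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def grepool(x: torch.Tensor, ratio: float = 0.5):
    """Score each node against a global read-out of the graph and keep the
    top-ranked fraction; the rest are returned as the dropped set.

    x: (num_nodes, feat_dim) node feature matrix of a single graph.
    """
    graph_repr = x.mean(dim=0)              # stand-in global read-out (assumption)
    scores = x @ graph_repr                 # node-to-representation affinity
    k = max(1, int(ratio * x.size(0)))
    keep = torch.topk(scores, k).indices    # indices of retained nodes
    mask = torch.ones(x.size(0), dtype=torch.bool)
    mask[keep] = False
    drop = mask.nonzero(as_tuple=True)[0]   # indices of discarded nodes
    # gate the kept features by their scores, a common soft-pooling idiom
    x_pooled = x[keep] * torch.sigmoid(scores[keep]).unsqueeze(-1)
    return x_pooled, keep, drop

def uniform_loss(dropped_logits: torch.Tensor) -> torch.Tensor:
    """Push class predictions of the dropped nodes toward the uniform
    distribution (via KL divergence), so the discarded segment still
    supplies a training signal instead of being thrown away outright."""
    log_probs = F.log_softmax(dropped_logits, dim=-1)
    target = torch.full_like(log_probs, 1.0 / dropped_logits.size(-1))
    return F.kl_div(log_probs, target, reduction="batchmean")

# Toy usage: pool a 30-node graph, then regularize hypothetical 5-class
# logits for the dropped nodes.
x = torch.randn(30, 64)
x_pooled, keep, drop = grepool(x, ratio=0.5)
aux = uniform_loss(torch.randn(len(drop), 5))
```

Under this reading, the auxiliary term would be added to the classification loss during training and dropped at inference, which is consistent with the abstract's claim that GrePool+ adds no extra computational cost at test time.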

