Graph Multihead Attention Pooling with Self-Supervised Learning

Authors

Wang Yu, Hu Liang, Wu Yang, Gao Wanfu

Affiliation

College of Computer Science and Technology, Jilin University, Changchun 130012, China.

Publication

Entropy (Basel). 2022 Nov 29;24(12):1745. doi: 10.3390/e24121745.

DOI: 10.3390/e24121745
PMID: 36554149
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9777688/
Abstract

Graph neural networks (GNNs), which work with graph-structured data, have attracted considerable attention and achieved promising performance on graph-related tasks. While the majority of existing GNN methods focus on the convolutional operation for encoding the node representations, the graph pooling operation, which maps the set of nodes into a coarsened graph, is crucial for graph-level tasks. We argue that a well-defined graph pooling operation should avoid the information loss of the local node features and global graph structure. In this paper, we propose a hierarchical graph pooling method based on the multihead attention mechanism, namely GMAPS, which compresses both node features and graph structure into the coarsened graph. Specifically, a multihead attention mechanism is adopted to arrange nodes into a coarsened graph based on their features and structural dependencies between nodes. In addition, to enhance the expressiveness of the cluster representations, a self-supervised mechanism is introduced to maximize the mutual information between the cluster representations and the global representation of the hierarchical graph. Our experimental results show that the proposed GMAPS obtains significant and consistent performance improvements compared with state-of-the-art baselines on six benchmarks from the biological and social domains of graph classification and reconstruction tasks.
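The abstract combines two mechanisms: multihead attention that assigns nodes to clusters of a coarsened graph using both node features and structural dependencies, and a self-supervised objective that maximizes mutual information between the cluster representations and the global graph representation. The sketch below illustrates one way these pieces could fit together in PyTorch. It is a minimal illustration under our own assumptions: the class name GMAPSPooling, the single propagation step used to inject structure, and the JSD-style mutual-information estimator are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GMAPSPooling(nn.Module):
    """Hypothetical sketch of attention-based hierarchical pooling with a
    self-supervised mutual-information term, loosely following the abstract."""

    def __init__(self, in_dim: int, num_clusters: int, num_heads: int = 4):
        super().__init__()
        # Learnable cluster "queries"; node representations act as keys/values.
        self.cluster_queries = nn.Parameter(torch.randn(num_clusters, in_dim))
        self.attn = nn.MultiheadAttention(in_dim, num_heads, batch_first=True)
        # Bilinear discriminator used to estimate mutual information.
        self.score = nn.Bilinear(in_dim, in_dim, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (N, d) node features; adj: (N, N) dense adjacency matrix.
        # One propagation step mixes neighbor features into each node, so the
        # attention sees structural dependencies as well as raw features.
        h = adj @ x + x                                        # (N, d)
        q = self.cluster_queries.unsqueeze(0)                  # (1, K, d)
        kv = h.unsqueeze(0)                                    # (1, N, d)
        clusters, assign = self.attn(q, kv, kv)                # (1,K,d), (1,K,N)
        clusters = clusters.squeeze(0)                         # (K, d)
        S = assign.squeeze(0).t()                              # (N, K) soft assignment
        # Coarsened adjacency, as in standard hierarchical pooling: A' = S^T A S.
        adj_pooled = S.t() @ adj @ S                           # (K, K)
        return clusters, adj_pooled, S

    def mi_loss(self, clusters: torch.Tensor, neg_clusters: torch.Tensor):
        # JSD-style lower bound on mutual information between each cluster
        # representation and the global (mean-readout) graph representation;
        # negatives come from a corrupted view of the same graph.
        g = clusters.mean(dim=0, keepdim=True).expand_as(clusters)
        pos = self.score(clusters, g)
        neg = self.score(neg_clusters, g)
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())
```

In a full model, several such layers would be stacked to coarsen the graph hierarchically, with the mutual-information term added to the task loss; a common choice for the corrupted view is the same graph with shuffled node features.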

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/0e93d91b0bc7/entropy-24-01745-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/71c877b00822/entropy-24-01745-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/da1c0db6eb65/entropy-24-01745-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/5232e8c301eb/entropy-24-01745-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/1fec023e8559/entropy-24-01745-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5fd8/9777688/8543d5edf9d3/entropy-24-01745-g006.jpg

Similar Articles

1. Graph Multihead Attention Pooling with Self-Supervised Learning.
   Entropy (Basel). 2022 Nov 29;24(12):1745. doi: 10.3390/e24121745.
2. Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2195-2207. doi: 10.1109/TNNLS.2020.3044146. Epub 2022 May 2.
3. CCP-GNN: Competitive Covariance Pooling for Improving Graph Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6395-6406. doi: 10.1109/TNNLS.2024.3390249. Epub 2025 Apr 4.
4. Augmented Graph Neural Network with hierarchical global-based residual connections.
   Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
5. Multivariate time-series classification with hierarchical variational graph pooling.
   Neural Netw. 2022 Oct;154:481-490. doi: 10.1016/j.neunet.2022.07.032. Epub 2022 Aug 2.
6. Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities.
   Neural Netw. 2022 Jan;145:356-373. doi: 10.1016/j.neunet.2021.11.001. Epub 2021 Nov 10.
7. Second-Order Pooling for Graph Neural Networks.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):6870-6880. doi: 10.1109/TPAMI.2020.2999032. Epub 2023 May 5.
8. Local structure-aware graph contrastive representation learning.
   Neural Netw. 2024 Apr;172:106083. doi: 10.1016/j.neunet.2023.12.037. Epub 2023 Dec 27.
9. Graph explicit pooling for graph-level representation learning.
   Neural Netw. 2025 Jan;181:106790. doi: 10.1016/j.neunet.2024.106790. Epub 2024 Oct 11.
10. Co-embedding of edges and nodes with deep graph convolutional neural networks.
    Sci Rep. 2023 Oct 8;13(1):16966. doi: 10.1038/s41598-023-44224-1.
