

Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities.

Affiliation

Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, 8916-5 Takayama-Cho, Ikoma, Nara 630-0192, Japan.

Publication information

Neural Netw. 2022 Jan;145:356-373. doi: 10.1016/j.neunet.2021.11.001. Epub 2021 Nov 10.

DOI: 10.1016/j.neunet.2021.11.001
PMID: 34808587
Abstract

Graph neural networks (GNNs) have been widely used to learn vector representations of graph-structured data and have achieved better task performance than conventional methods. The foundation of GNNs is the message passing procedure, which propagates the information in a node to its neighbors. Since this procedure proceeds one step per layer, the range of information propagation among nodes is small in the lower layers and expands toward the higher layers. Therefore, a GNN model must be deep enough to capture global structural information in a graph. On the other hand, deep GNN models are known to suffer from performance degradation because many message passing steps cause them to lose nodes' local information, which is essential for good model performance. In this study, we propose multi-level attention pooling (MLAP) for graph-level classification tasks, which can adapt to both local and global structural information in a graph. It has an attention pooling layer for each message passing step and computes the final graph representation by unifying the layer-wise graph representations. The MLAP architecture allows models to utilize structural information at multiple levels of locality because it preserves layer-wise information before it is lost to oversmoothing. Our experimental results show that the MLAP architecture improves graph classification performance compared to baseline architectures. In addition, analyses of the layer-wise graph representations suggest that aggregating information from multiple levels of locality indeed has the potential to improve the discriminability of learned graph representations.
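To make the idea concrete, below is a minimal sketch of layer-wise attention pooling in plain PyTorch. It assumes a single graph with a dense adjacency matrix, a simple mean-aggregation message-passing step, a gated attention pool per layer, and a plain sum as the unification of layer-wise graph embeddings; all class names, dimensions, and the backbone are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of multi-level attention pooling (MLAP), assuming a plain
# PyTorch backbone. The mean-aggregation message passing and the sum over
# layer-wise graph embeddings are illustrative choices; the paper's exact
# GNN backbone and unification scheme may differ.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Soft attention over nodes: a weighted sum of node embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)               # scores each node

    def forward(self, h):                           # h: (num_nodes, dim)
        alpha = torch.softmax(self.gate(h), dim=0)  # attention weight per node
        return (alpha * h).sum(dim=0)               # (dim,) graph-level embedding

class MLAPSketch(nn.Module):
    """One attention-pooling head per message-passing step."""
    def __init__(self, in_dim, hid_dim, num_layers, num_classes):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers))
        self.pools = nn.ModuleList(
            AttentionPool(hid_dim) for _ in range(num_layers))
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; row-normalizing it makes
        # a_hat @ h a mean aggregation over each node's neighborhood.
        a_hat = adj / adj.sum(dim=1, keepdim=True)
        h, layer_graph_reprs = x, []
        for layer, pool in zip(self.layers, self.pools):
            h = torch.relu(layer(a_hat @ h))   # one message-passing step
            layer_graph_reprs.append(pool(h))  # pool BEFORE oversmoothing sets in
        # Unify the layer-wise graph representations; a plain sum is one choice.
        g = torch.stack(layer_graph_reprs).sum(dim=0)
        return self.classifier(g)

# Toy usage: 10 nodes, 8 input features, 5 message-passing steps.
model = MLAPSketch(in_dim=8, hid_dim=64, num_layers=5, num_classes=2)
x = torch.randn(10, 8)
adj = torch.eye(10)                            # self-loops only, for illustration
logits = model(x, adj)                         # shape: (2,)
```

Pooling each layer's node embeddings before the next message-passing step is what preserves the more local information that later layers would otherwise smooth away; the sum over layer-wise embeddings is one simple unification, and a learned weighting over layers would be a natural alternative to try.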


Similar articles

1. Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities.
   Neural Netw. 2022 Jan;145:356-373. doi: 10.1016/j.neunet.2021.11.001. Epub 2021 Nov 10.
2. Augmented Graph Neural Network with hierarchical global-based residual connections.
   Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
3. Harnessing collective structure knowledge in data augmentation for graph neural networks.
   Neural Netw. 2024 Dec;180:106651. doi: 10.1016/j.neunet.2024.106651. Epub 2024 Aug 23.
4. Co-embedding of edges and nodes with deep graph convolutional neural networks.
   Sci Rep. 2023 Oct 8;13(1):16966. doi: 10.1038/s41598-023-44224-1.
5. Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2195-2207. doi: 10.1109/TNNLS.2020.3044146. Epub 2022 May 2.
6. Graph Multihead Attention Pooling with Self-Supervised Learning.
   Entropy (Basel). 2022 Nov 29;24(12):1745. doi: 10.3390/e24121745.
7. Deep reinforcement learning guided graph neural networks for brain network analysis.
   Neural Netw. 2022 Oct;154:56-67. doi: 10.1016/j.neunet.2022.06.035. Epub 2022 Jul 3.
8. Locality preserving dense graph convolutional networks with graph context-aware node representations.
   Neural Netw. 2021 Nov;143:108-120. doi: 10.1016/j.neunet.2021.05.031. Epub 2021 Jun 2.
9. CCP-GNN: Competitive Covariance Pooling for Improving Graph Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6395-6406. doi: 10.1109/TNNLS.2024.3390249. Epub 2025 Apr 4.
10. Improving Attention Mechanism in Graph Neural Networks via Cardinality Preservation.
    IJCAI (U S). 2020 Jul;2020:1395-1402. doi: 10.24963/ijcai.2020/194.