


Augmented Graph Neural Network with hierarchical global-based residual connections.

Affiliations

Laboratory of Computer Science and Mathematics and their Applications (LIMA), Faculty of Science, University Chouaib Doukkali, El Jadida 24000, Morocco.

Laboratory of Intelligent Systems, Georesources and Renewable Energies (SIGER), University Sidi Mohamed Ben Abdellah, Fez, Morocco.

Publication information

Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.

DOI: 10.1016/j.neunet.2022.03.008
PMID: 35313247
Abstract

Graph Neural Networks (GNNs) are powerful architectures for learning on graphs. They are efficient at predicting node, link and graph properties. Standard GNN variants follow a message passing scheme to iteratively update node representations using information from higher-order neighborhoods. Consequently, deeper GNNs make it possible to define high-level node representations generated from local as well as distant neighborhoods. However, deeper networks are prone to over-smoothing. To build deeper GNN architectures without losing the dependency between lower layers (those closer to the input) and higher layers (those closer to the output), networks can integrate residual connections linking intermediate layers. We propose the Augmented Graph Neural Network (AGNN) model with hierarchical global-based residual connections. Using the proposed residual connections, the model generates high-level node representations without the need for a deeper architecture. We show that the node representations generated by the proposed AGNN model define an expressive, all-encompassing representation of the entire graph. As such, the graph predictions generated by the AGNN model considerably surpass state-of-the-art results. Moreover, we carry out extensive experiments to identify the best global pooling strategy and attention weights to define adequate hierarchical and global-based residual connections for different graph property prediction tasks. Furthermore, we propose a reversible variant of the AGNN model to address the extensive memory consumption that typically arises when training networks on large and dense graph datasets. The proposed Reversible Augmented Graph Neural Network (R-AGNN) stores only the node representations acquired from the output layer, as opposed to saving all representations from intermediate layers as is conventionally done when optimizing the parameters of other GNNs. We further refine the definition of the backpropagation algorithm to fit the R-AGNN model. We evaluate the proposed AGNN and R-AGNN models on benchmark Molecular, Bioinformatics and Social Networks datasets for graph classification and achieve state-of-the-art results. For instance, the AGNN model realizes improvements of +39% on IMDB-MULTI, reaching 91.7% accuracy, and +16% on COLLAB, reaching 96.8% accuracy, compared to other GNN variants.
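To make the abstract's central idea concrete, the following is a minimal NumPy sketch of a hierarchical, global residual readout: each intermediate layer's node representations are globally pooled, and the pooled summaries are mixed with per-layer attention weights into a single graph representation, so graph-level information from every depth is retained without stacking more layers. All function names, the mean-pooling choice, and the scalar attention weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def message_passing_layer(H, A, W):
    """One generic GNN layer: aggregate neighbor features via a
    normalized adjacency A, apply a linear map W and a ReLU."""
    return np.maximum(A @ H @ W, 0.0)

def global_mean_pool(H):
    """Pool node representations into one graph-level vector."""
    return H.mean(axis=0)

def hierarchical_global_readout(X, A, weights, alphas):
    """Stack message-passing layers; pool every layer's output
    (including the input) and combine the pooled summaries with
    normalized attention weights `alphas` into one graph vector."""
    H = X
    pooled = [global_mean_pool(H)]          # layer-0 (input) summary
    for W in weights:
        H = message_passing_layer(H, A, W)
        pooled.append(global_mean_pool(H))  # summary at this depth
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()          # attention over layers
    return sum(a * p for a, p in zip(alphas, pooled))

# Tiny 4-node path graph, 3 input features, two layers of width 3.
rng = np.random.default_rng(0)
A = np.eye(4) + np.array([[0, 1, 0, 0],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [0, 0, 1, 0]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)        # row-normalize with self-loops
X = rng.normal(size=(4, 3))
weights = [rng.normal(size=(3, 3)) for _ in range(2)]
g = hierarchical_global_readout(X, A, weights, alphas=[1.0, 1.0, 1.0])
print(g.shape)  # (3,)
```

Because every depth contributes a pooled summary directly to the final graph vector, lower-layer information reaches the readout even if deeper representations over-smooth, which is the dependency-preserving effect the residual connections are meant to provide.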


Similar articles

1. Augmented Graph Neural Network with hierarchical global-based residual connections.
   Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
2. Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities.
   Neural Netw. 2022 Jan;145:356-373. doi: 10.1016/j.neunet.2021.11.001. Epub 2021 Nov 10.
3. Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2195-2207. doi: 10.1109/TNNLS.2020.3044146. Epub 2022 May 2.
4. k-hop graph neural networks.
   Neural Netw. 2020 Oct;130:195-205. doi: 10.1016/j.neunet.2020.07.008. Epub 2020 Jul 10.
5. Auto-GNN: Neural architecture search of graph neural networks.
   Front Big Data. 2022 Nov 17;5:1029307. doi: 10.3389/fdata.2022.1029307. eCollection 2022.
6. Graph Transformer Networks: Learning meta-path graphs to improve GNNs.
   Neural Netw. 2022 Sep;153:104-119. doi: 10.1016/j.neunet.2022.05.026. Epub 2022 Jun 4.
7. PSA-GNN: An augmented GNN framework with priori subgraph knowledge.
   Neural Netw. 2024 May;173:106155. doi: 10.1016/j.neunet.2024.106155. Epub 2024 Feb 4.
8. AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing.
   IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13764-13776. doi: 10.1109/TNNLS.2023.3271623. Epub 2024 Oct 7.
9. Harnessing collective structure knowledge in data augmentation for graph neural networks.
   Neural Netw. 2024 Dec;180:106651. doi: 10.1016/j.neunet.2024.106651. Epub 2024 Aug 23.
10. SP-GNN: Learning structure and position information from graphs.
   Neural Netw. 2023 Apr;161:505-514. doi: 10.1016/j.neunet.2023.01.051. Epub 2023 Feb 4.

Cited by

1. A Transformer-Based Ensemble Framework for the Prediction of Protein-Protein Interaction Sites.
   Research (Wash D C). 2023 Sep 27;6:0240. doi: 10.34133/research.0240. eCollection 2023.