
DeeperGCN: Training Deeper GCNs With Generalized Aggregation Functions

Authors

Li Guohao, Xiong Chenxin, Qian Guocheng, Thabet Ali, Ghanem Bernard

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13024-13034. doi: 10.1109/TPAMI.2023.3306930. Epub 2023 Oct 3.

Abstract

Graph Neural Networks (GNNs) have drawn significant attention for representation learning on graphs. Recent works have developed frameworks for training very deep GNNs and shown impressive results on tasks such as point cloud learning and protein interaction prediction. In this work, we study the performance of such deep models on large-scale graphs. In particular, we examine how the choice of aggregation function affects deep models. We find that GNNs are very sensitive to the choice of aggregation function (e.g. mean, max, or sum) when applied to different datasets. We study this issue systematically and propose to alleviate it by introducing a novel class of aggregation functions named Generalized Aggregation Functions. The proposed functions extend beyond commonly used aggregators to a wide range of new permutation-invariant functions. Generalized Aggregation Functions are fully differentiable, so their parameters can be learned end-to-end to yield a suitable aggregation function for each task. We show that, equipped with the proposed aggregation functions, deep residual GNNs outperform the state of the art on several benchmarks from the Open Graph Benchmark (OGB) across tasks and domains.

