

Graph Aggregating-Repelling Network: Do Not Trust All Neighbors in Heterophilic Graphs.

Authors

Wang Yuhu, Wen Jinyong, Zhang Chunxia, Xiang Shiming

Affiliations

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.

School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China.

Publication

Neural Netw. 2024 Oct;178:106484. doi: 10.1016/j.neunet.2024.106484. Epub 2024 Jun 21.

Abstract

Graph neural networks (GNNs) have demonstrated exceptional performance in processing various types of graph data, such as citation networks and social networks. Although many of these GNNs prove their superiority in handling homophilic graphs, they often overlook the other widespread kind, heterophilic graphs, in which adjacent nodes tend to have different classes or dissimilar features. Recent methods attempt to address heterophilic graphs in the graph spatial domain, trying either to aggregate more similar nodes or to suppress dissimilar nodes with negative weights. However, they may neglect valuable heterophilic information or extract it ineffectively, which can degrade the performance of downstream tasks on heterophilic graphs, such as node classification and graph classification. Hence, a novel framework named GARN is proposed to extract both homophilic and heterophilic information effectively. First, we analyze the shortcomings of most GNNs in tackling heterophilic graphs from the perspective of graph spectral and spatial theory. Then, motivated by these analyses, a Graph Aggregating-Repelling Convolution (GARC) mechanism is designed to fuse low-pass and high-pass graph filters. Technically, it learns positive attention weights as a low-pass filter to aggregate similar adjacent nodes, and negative attention weights as a high-pass filter to repel dissimilar adjacent nodes. A learnable integration weight adaptively fuses these two filters and balances the proportion of the learned positive and negative weights; this allows GARC to evolve into different types of graph filters and prevents it from over-relying on high intra-class similarity. Finally, the GARN framework is built by simply stacking several GARC layers, and its graph representation learning ability is evaluated on both node classification and image-converted graph classification tasks. Extensive experiments on multiple homophilic and heterophilic graphs, as well as complex real-world image-converted graphs, indicate the effectiveness of the proposed framework and mechanism over several representative GNN baselines.
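The aggregating-repelling idea in the abstract can be sketched as a single NumPy layer. This is a minimal illustrative sketch, not the paper's implementation: the cosine-similarity scores and the scalar `eps` below are stand-ins for GARC's learned attention weights and learnable integration weight.

```python
import numpy as np

def garc_layer(X, A, W, eps=0.5):
    """Sketch of one aggregating-repelling convolution.

    X: (n, d) node features; A: (n, n) binary adjacency matrix;
    W: (d, d') linear weights; eps in [0, 1] balances the low-pass
    (aggregation) and high-pass (repulsion) branches.
    """
    H = X @ W
    # Cosine similarity between node embeddings (stand-in for learned attention).
    norm = np.linalg.norm(H, axis=1, keepdims=True) + 1e-8
    S = (H / norm) @ (H / norm).T
    S = S * A                        # keep scores only on existing edges
    pos = np.clip(S, 0.0, None)      # positive weights: aggregate similar neighbors
    neg = np.clip(S, None, 0.0)      # negative weights: repel dissimilar neighbors
    # eps=1 gives pure low-pass aggregation (H + pos @ H);
    # eps=0 gives pure repulsion (H + neg @ H, subtracting dissimilar neighbors).
    return H + (eps * pos + (1.0 - eps) * neg) @ H
```

Because the output is linear in `eps`, the fused filter interpolates smoothly between the aggregating and repelling extremes, which mirrors how the learnable integration weight lets GARC behave as different filter types.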


Similar Articles

1. Exploiting Neighbor Effect: Conv-Agnostic GNN Framework for Graphs With Heterophily. IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13383-13396. doi: 10.1109/TNNLS.2023.3267902. Epub 2024 Oct 7.
2. Generalized heterophily graph data augmentation for node classification. Neural Netw. 2023 Nov;168:339-349. doi: 10.1016/j.neunet.2023.09.021. Epub 2023 Sep 19.
3. ES-GNN: Generalizing Graph Neural Networks Beyond Homophily With Edge Splitting. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):11345-11360. doi: 10.1109/TPAMI.2024.3459932. Epub 2024 Nov 6.
4. DPGCL: Dual pass filtering based graph contrastive learning. Neural Netw. 2024 Nov;179:106517. doi: 10.1016/j.neunet.2024.106517. Epub 2024 Jul 11.
5. Beyond low-pass filtering on large-scale graphs via Adaptive Filtering Graph Neural Networks. Neural Netw. 2024 Jan;169:1-10. doi: 10.1016/j.neunet.2023.09.042. Epub 2023 Oct 11.
6. Subgraph-aware graph structure revision for spatial-temporal graph modeling. Neural Netw. 2022 Oct;154:190-202. doi: 10.1016/j.neunet.2022.07.017. Epub 2022 Jul 16.
7. Probability graph complementation contrastive learning. Neural Netw. 2024 Nov;179:106522. doi: 10.1016/j.neunet.2024.106522. Epub 2024 Jul 9.
8. Augmented Graph Neural Network with hierarchical global-based residual connections. Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
9. Beyond smoothness: A general optimization framework for graph neural networks with negative Laplacian regularization. Neural Netw. 2024 Dec;180:106704. doi: 10.1016/j.neunet.2024.106704. Epub 2024 Sep 16.
