Kesimoglu Ziynet Nesibe, Bozdag Serdar
Department of Computer Science and Engineering, University of North Texas, Denton, TX, USA.
BioDiscovery Institute, University of North Texas, Denton, TX, USA.
Sci Rep. 2024 Nov 24;14(1):29119. doi: 10.1038/s41598-024-78555-4.
Graph Neural Networks (GNN) emerged as a deep learning framework to generate node and graph embeddings for downstream machine learning tasks. Popular GNN-based architectures operate on networks with a single node and edge type. However, many real-world networks include multiple types of nodes and edges. Enabling these architectures to work on networks with multiple node and edge types brings additional challenges due to the heterogeneity of the networks and the multiplicity of the existing associations. In this study, we present a framework, named GRAF (Graph Attention-aware Fusion Networks), to convert multiplex heterogeneous networks into homogeneous networks, making them more suitable for graph representation learning. Using attention-based neighborhood aggregation, GRAF learns the importance of each neighbor per node (called node-level attention), followed by the importance of each network layer (called network layer-level attention). GRAF then performs a network fusion step, weighting each edge according to the learned attentions. After an edge elimination step based on edge weights, GRAF applies Graph Convolutional Networks (GCN) to the fused network, incorporating node features on the graph-structured data for node classification or a similar downstream task. To demonstrate GRAF's generalizability, we applied it to four datasets from different domains and observed that GRAF outperformed or was on par with the baselines and state-of-the-art (SOTA) methods. We were able to interpret GRAF's findings using the attention weights. Source code for GRAF is publicly available at https://github.com/bozdaglab/GRAF.
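The fusion pipeline described in the abstract (weighting each edge by the learned node-level and layer-level attentions, eliminating low-weight edges, then running GCN-style propagation on the fused network) can be illustrated with a minimal NumPy sketch. This is not GRAF's implementation; the attention matrices are assumed to be given (in GRAF they are learned), and the `keep_ratio` eliminator and function names are illustrative assumptions.

```python
import numpy as np

def fuse_layers(adj_layers, node_attn, layer_attn, keep_ratio=0.5):
    """Fuse multiplex network layers into one weighted homogeneous network.

    adj_layers: list of (n, n) binary adjacency matrices, one per layer.
    node_attn:  list of (n, n) node-level attention scores (one per layer;
                assumed precomputed here, learned in GRAF).
    layer_attn: (L,) vector of layer-level attention weights.
    keep_ratio: fraction of highest-weight fused edges to keep
                (a simple stand-in for the edge-elimination step).
    """
    n = adj_layers[0].shape[0]
    fused = np.zeros((n, n))
    for A, alpha, beta in zip(adj_layers, node_attn, layer_attn):
        # weight each edge by both its node-level and layer-level attention
        fused += beta * alpha * A
    # edge elimination: drop edges below the (1 - keep_ratio) weight quantile
    weights = fused[fused > 0]
    if weights.size:
        thresh = np.quantile(weights, 1.0 - keep_ratio)
        fused[fused < thresh] = 0.0
    return fused

def gcn_layer(fused, X, W):
    """One GCN propagation step on the fused network:
    ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = fused + np.eye(fused.shape[0])  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

With two toy 3-node layers and uniform attentions, `fuse_layers` produces a single weighted adjacency matrix whose edges blend both layers, and `gcn_layer` propagates node features over it.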