Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, 8916-5 Takayama-Cho, Ikoma, Nara 630-0192, Japan.
Neural Netw. 2022 Jan;145:356-373. doi: 10.1016/j.neunet.2021.11.001. Epub 2021 Nov 10.
Graph neural networks (GNNs) have been widely used to learn vector representations of graph-structured data and have achieved better task performance than conventional methods. The foundation of GNNs is the message passing procedure, which propagates information from a node to its neighbors. Since this procedure proceeds one step per layer, the range of information propagation among nodes is small in the lower layers and expands toward the higher layers. Therefore, a GNN model has to be deep enough to capture the global structural information in a graph. On the other hand, deep GNN models are known to suffer from performance degradation because repeated message passing steps smooth away the nodes' local information, which is essential for good model performance. In this study, we propose multi-level attention pooling (MLAP) for graph-level classification tasks, which can adapt to both local and global structural information in a graph. It has an attention pooling layer for each message passing step and computes the final graph representation by unifying the layer-wise graph representations. The MLAP architecture allows models to utilize the structural information of graphs at multiple levels of locality because it preserves layer-wise information before it is lost to oversmoothing. Our experimental results show that the MLAP architecture improves graph classification performance compared to the baseline architectures. In addition, analyses of the layer-wise graph representations suggest that aggregating information from multiple levels of locality indeed has the potential to improve the discriminability of learned graph representations.
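To make the architecture described in the abstract concrete, the following is a minimal sketch of the MLAP idea, not the authors' reference implementation: one attention pooling layer per message passing step, with the final graph representation formed by unifying (here, simply summing) the layer-wise pooled representations. It assumes a PyTorch Geometric-style setup; the class name MLAPSum, the choice of GCNConv as the message passing layer, and the sum aggregation are illustrative assumptions.

```python
# Sketch of multi-level attention pooling (MLAP) for graph classification.
# Assumptions: PyTorch Geometric available; GCNConv as the message passing
# layer and a plain sum over layer-wise representations are illustrative
# choices, not necessarily those of the paper.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, GlobalAttention


class MLAPSum(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, in_dim, hidden_dim, num_layers, num_classes):
        super().__init__()
        self.convs = nn.ModuleList()
        self.pools = nn.ModuleList()
        for layer in range(num_layers):
            self.convs.append(
                GCNConv(in_dim if layer == 0 else hidden_dim, hidden_dim)
            )
            # One attention pooling layer per message passing step.
            self.pools.append(GlobalAttention(gate_nn=nn.Linear(hidden_dim, 1)))
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        layer_reps = []
        for conv, pool in zip(self.convs, self.pools):
            x = conv(x, edge_index).relu()
            # Pool node embeddings at this depth, preserving this level of
            # locality before further message passing smooths it away.
            layer_reps.append(pool(x, batch))
        # Unify the layer-wise graph representations into the final one.
        graph_rep = torch.stack(layer_reps, dim=0).sum(dim=0)
        return self.classifier(graph_rep)
```

Because each pooling layer reads the node embeddings at a fixed depth, the unified representation mixes localities ranging from 1-hop neighborhoods up to the receptive field of the deepest layer; a learned weighted combination of the layer-wise representations is a natural alternative to the plain sum used here.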