Ai Xing, Sun Chengyu, Zhang Zhihong, Hancock Edwin R
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):4593-4606. doi: 10.1109/TNNLS.2022.3144343. Epub 2024 Apr 4.
Graph neural networks (GNNs) are recently proposed neural network architectures for processing graph-structured data. Because of the neighbor aggregation strategy they employ, existing GNNs focus on capturing node-level information and neglect higher-level structural information. Existing GNNs therefore suffer from representational limitations caused by the local permutation invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem, which demonstrates that subgraph-level information helps overcome the limitations associated with LPI. We also propose a subgraph counting method based on dynamic programming, with time complexity O(n), where n is the number of nodes of a graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.
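The LPI limitation the abstract refers to can be made concrete with a small example. The sketch below (a minimal NumPy illustration, not taken from the paper) builds two rooted neighborhoods whose neighbor-feature multisets are identical, shows that a mean-style node-level aggregator assigns both center nodes the same representation, and then distinguishes them with a simple subgraph-level statistic (triangle counts); the paper's own DP-based O(n) counting method is not reproduced here.

```python
import numpy as np

# Two 3-node graphs with identical node features and identical
# neighbor multisets at node 0, but different local structure:
# graph A closes the triangle (edge 1-2), graph B does not.
adj_A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]], dtype=float)
adj_B = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)
x = np.ones((3, 4))  # identical features on every node

def mean_aggregate(adj, feats):
    """GCN-style mean over neighbors (node-level information only)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    return (adj @ feats) / deg

def triangles_at(adj, v):
    """Triangles through node v: closed 3-walks from v, halved
    because each triangle is traversed in two orientations."""
    return int(np.linalg.matrix_power(adj, 3)[v, v]) // 2

# Node-level aggregation cannot separate the two center nodes...
h_A = mean_aggregate(adj_A, x)[0]
h_B = mean_aggregate(adj_B, x)[0]
print(np.allclose(h_A, h_B))   # True

# ...but a subgraph-level statistic can.
print(triangles_at(adj_A, 0))  # 1
print(triangles_at(adj_B, 0))  # 0
```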
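The architectural details of TL-GNN are given in the full text; as a rough sketch of the two-level idea only (not the authors' implementation), the PyTorch layer below fuses a standard node-level neighbor aggregation with per-node subgraph-level features such as the triangle counts above. All names here (TwoLevelLayer, sub_feats) are placeholders introduced for illustration.

```python
import torch
import torch.nn as nn

class TwoLevelLayer(nn.Module):
    """Hypothetical fusion layer: combines node-level messages with
    precomputed subgraph-level features (e.g., local subgraph counts)."""

    def __init__(self, node_dim: int, sub_dim: int, out_dim: int):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, out_dim)
        self.sub_proj = nn.Linear(sub_dim, out_dim)

    def forward(self, x, adj, sub_feats):
        # Node level: mean aggregation over neighbors (simplified GCN step).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        node_msg = (adj @ x) / deg
        # Subgraph level: project per-node subgraph statistics and fuse.
        return torch.relu(self.node_proj(node_msg) + self.sub_proj(sub_feats))

# Toy usage: 5 nodes, 8-dim node features, 3 subgraph statistics per node.
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
layer = TwoLevelLayer(node_dim=8, sub_dim=3, out_dim=16)
out = layer(torch.randn(5, 8), adj, torch.randn(5, 3))
print(out.shape)  # torch.Size([5, 16])
```

Sum aggregation instead of mean, or concatenation instead of addition, would be equally plausible fusion choices; the abstract does not pin these down.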