Bianchi Filippo Maria, Grattarola Daniele, Livi Lorenzo, Alippi Cesare
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2195-2207. doi: 10.1109/TNNLS.2020.3044146. Epub 2022 May 2.
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a preprocessing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MAXCUT solution. Second, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Notably, we show that many edges can be removed without significantly altering the graph structure. Experimental results show that NDP is more efficient than state-of-the-art graph pooling operators while achieving competitive performance on a wide variety of graph classification tasks.
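To make the three steps concrete, below is a minimal NumPy sketch of one coarsening level. It is an illustration under assumptions, not the authors' reference implementation: the MAXCUT side is approximated here by the sign pattern of the Laplacian eigenvector associated with the largest eigenvalue, and the function name `ndp_coarsen` and the pruning threshold `eps` are illustrative choices.

```python
import numpy as np

def ndp_coarsen(A, eps=1e-2):
    """One level of graph coarsening in the spirit of NDP (sketch).

    A: symmetric weighted adjacency matrix (dense NumPy array).
    Returns (A_coarse, kept): the pruned adjacency of the coarsened
    graph and the indices of the retained nodes.
    """
    d = A.sum(axis=1)
    L = np.diag(d) - A                       # combinatorial Laplacian

    # Step 1: node decimation. Approximate the MAXCUT partition with
    # the eigenvector of L for its largest eigenvalue (assumption made
    # for this sketch) and keep the nodes on the non-negative side.
    _, eigvecs = np.linalg.eigh(L)           # eigenvalues ascending
    u_max = eigvecs[:, -1]
    kept = np.flatnonzero(u_max >= 0)
    dropped = np.flatnonzero(u_max < 0)
    if dropped.size == 0:                    # degenerate cut: nothing to remove
        return A.copy(), kept

    # Step 2: Kron reduction. The Laplacian of the coarsened graph is
    # the Schur complement of the block of dropped nodes.
    L_vv = L[np.ix_(kept, kept)]
    L_vw = L[np.ix_(kept, dropped)]
    L_ww = L[np.ix_(dropped, dropped)]
    L_coarse = L_vv - L_vw @ np.linalg.solve(L_ww, L_vw.T)

    # Recover the adjacency from the reduced Laplacian: off-diagonal
    # entries, sign-flipped (clipped to guard against numerical noise).
    A_coarse = -L_coarse
    np.fill_diagonal(A_coarse, 0.0)
    A_coarse[A_coarse < 0] = 0.0

    # Step 3: sparsification. Kron reduction densifies the graph, so
    # prune weak edges below the threshold eps (a free parameter here).
    A_coarse[A_coarse < eps] = 0.0
    return A_coarse, kept
```

In the full method, this coarsening is applied recursively in the offline preprocessing stage to build the pyramid of graphs onto which the GNN pools its node representations during training.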