IEEE Trans Pattern Anal Mach Intell. 2022 Feb;44(2):727-739. doi: 10.1109/TPAMI.2021.3073504. Epub 2022 Jan 7.
The popularity of deep learning techniques has renewed interest in neural architectures able to process complex structures that can be represented using graphs, inspired by Graph Neural Networks (GNNs). We focus our attention on the GNN model originally proposed by Scarselli et al. (2009), which encodes the state of the nodes of the graph by means of an iterative diffusion procedure that, during the learning stage, must be computed at every epoch until the fixed point of a learnable state transition function is reached, propagating the information among the neighbouring nodes. We propose a novel approach to learning in GNNs, based on constrained optimization in the Lagrangian framework. Learning both the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and the network unfolding. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. This process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
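The scheme described above can be illustrated with a minimal sketch (not the authors' code): node states are treated as free variables, the fixed-point condition x = f(x) of the state transition function becomes an equality constraint, and a saddle point of the Lagrangian is sought by gradient descent in the weights and states and gradient ascent in the multipliers. The toy graph, the scalar states, the tanh transition, the linear readout, and the augmented quadratic penalty used here for stability are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of Lagrangian-based GNN learning on a 3-node path
# graph with scalar node states. Constraint per node i:
#     g_i = x_i - tanh(w * (A @ x)_i + b) = 0   (fixed point of transition)
# Saddle-point search: descend in (x, w, b, v), ascend in multipliers lam.
# An augmented quadratic penalty (coefficient c) is added for stability;
# this is a common stabilization, assumed here, not taken from the paper.

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])           # adjacency of an undirected path graph
y = np.array([1., -1., 1.])            # toy supervised node targets

x = 0.1 * rng.normal(size=3)           # free node-state variables
lam = np.zeros(3)                      # Lagrange multipliers
w, b, v = 0.5, 0.0, 0.5                # transition / readout parameters
eta, rho, c = 0.05, 0.05, 1.0          # descent step, ascent step, penalty

def residual(x, w, b):
    """Constraint G(x) = x - f(x); zero exactly at the transition fixed point."""
    return x - np.tanh(w * (A @ x) + b)

r0 = np.abs(residual(x, w, b)).max()
for _ in range(5000):
    s = w * (A @ x) + b
    f = np.tanh(s)
    df = 1.0 - f ** 2                  # derivative of tanh
    g = x - f                          # constraint violation
    lam_eff = lam + c * g              # multiplier + penalty contribution
    # Gradients of L = 0.5*||v*x - y||^2 + lam.g + 0.5*c*||g||^2
    grad_x = v * (v * x - y) + lam_eff - w * (A.T @ (df * lam_eff))
    grad_w = -np.sum(lam_eff * df * (A @ x))
    grad_b = -np.sum(lam_eff * df)
    grad_v = np.sum((v * x - y) * x)
    x -= eta * grad_x
    w -= eta * grad_w
    b -= eta * grad_b
    v -= eta * grad_v
    lam += rho * g                     # gradient ascent on the multipliers

r1 = np.abs(residual(x, w, b)).max()
print(f"constraint residual: {r0:.3f} -> {r1:.4f}")
```

At a saddle point the multiplier update leaves lam stationary only when g = 0, so the node states satisfy the transition fixed point without any separate epoch-wise diffusion loop; the constraint residual shrinks jointly with the training loss.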