School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, China.
Neural Netw. 2021 Aug;140:130-135. doi: 10.1016/j.neunet.2021.02.025. Epub 2021 Mar 10.
The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from the analogy with convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing can best be understood in terms of power iteration. By fully or partially removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that learn iteratively with only a single aggregator. Experiments show that our models extend GNNs and enhance their capability to process networks with random features. Moreover, we demonstrate design redundancy in several state-of-the-art GNNs and define a lower limit for model evaluation using a random message-passing aggregator. Our findings push the boundaries of the theoretical understanding of neural networks.
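To make the power-iteration view concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes a symmetrically normalized adjacency as the single fixed aggregator, drops all layer weights and activations, and re-orthonormalizes the feature columns between propagation steps. All function names and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of message passing as subspace power iteration:
# with activations and layer weights removed, each "layer" is just one
# multiplication by a fixed aggregator matrix.
import numpy as np

def normalized_aggregator(adj: np.ndarray) -> np.ndarray:
    """Symmetrically normalized adjacency with self-loops, D^{-1/2}(A+I)D^{-1/2} (assumed choice)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def subspace_power_iteration(adj: np.ndarray, features: np.ndarray,
                             num_iters: int = 10) -> np.ndarray:
    """Propagate features with a single fixed aggregator and no nonlinearity.

    Each step multiplies the feature matrix by the aggregator and
    re-orthonormalizes the columns (QR), so the columns drift toward a
    dominant invariant subspace of the aggregator -- classic subspace/power
    iteration rather than learned message passing.
    """
    agg = normalized_aggregator(adj)
    x, _ = np.linalg.qr(features)          # orthonormal starting subspace
    for _ in range(num_iters):
        x = agg @ x                        # one round of "message passing"
        x, _ = np.linalg.qr(x)             # keep the columns well-conditioned
    return x                               # smoothed node embeddings for a downstream classifier

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, n_feats = 20, 4
    adj = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T          # random undirected graph
    feats = rng.standard_normal((n_nodes, n_feats))   # random node features
    emb = subspace_power_iteration(adj, feats)
    print(emb.shape)  # (20, 4): one embedding per node
```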