Understanding the message passing in graph neural networks via power iteration clustering.

Affiliation

School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, China.

Publication Info

Neural Netw. 2021 Aug;140:130-135. doi: 10.1016/j.neunet.2021.02.025. Epub 2021 Mar 10.

Abstract

The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from the analogy with convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing is best understood in terms of power iteration. By fully or partly removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that learn iteratively with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process randomly featured networks. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and, using a random message-passing aggregator, define a lower limit for model evaluation. Our findings push the boundaries of the theoretical understanding of neural networks.
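The core idea above can be illustrated concretely: if a GNN's activation functions and layer weights are stripped away, each layer collapses to one application of a fixed aggregator, so stacking layers is just repeated multiplication by that matrix, i.e. power iteration on the node features. Below is a minimal sketch under common assumptions (the aggregator is the symmetrically normalized adjacency with self-loops, and feature columns are renormalized each step as in power iteration); the function name `spic_propagate` and the toy graph are illustrative, not from the paper.

```python
import numpy as np

def spic_propagate(adj, features, num_iters=10):
    """Power-iteration-style message passing with a single fixed aggregator.

    With activations and layer weights removed, each "layer" is one
    multiplication by A_hat = D^{-1/2} (A + I) D^{-1/2}.
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    x = features
    for _ in range(num_iters):
        x = a_hat @ x                             # one round of aggregation
        # Renormalize columns so iterates stay bounded, as in power iteration.
        x = x / np.linalg.norm(x, axis=0, keepdims=True)
    return x

# Toy example: two triangles joined by one edge. Even from random initial
# features, repeated aggregation drives nodes in the same cluster toward
# similar feature vectors.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X0 = np.random.default_rng(0).standard_normal((6, 2))
Z = spic_propagate(A, X0)
```

The dominant eigenvectors of the aggregator encode community structure, which is why the iterates become cluster-revealing; a downstream classifier (or a clustering step) on `Z` then does the actual learning.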
