
Contrastive message passing for robust graph neural networks with sparse labels

Authors

Yan Hui, Gao Yuan, Ai Guoguo, Wang Huan, Li Xin

Affiliations

School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210000, Jiangsu, China.

Publication information

Neural Netw. 2025 Feb;182:106912. doi: 10.1016/j.neunet.2024.106912. Epub 2024 Nov 19.

Abstract

Graph Neural Networks (GNNs) have achieved great success in semi-supervised learning. Existing GNNs typically aggregate features via message passing with the aid of rich labels. However, real-world graphs often have limited labels, and overfitting weakens classification ability when labels are insufficient. Moreover, traditional message passing is sensitive to structural noise, such as perturbations on edges, and the performance of GNNs drops sharply when trained on such graphs. To mitigate these issues, we present a noise-resistant framework based on contrastive message passing. Beyond the limited labelled nodes widely used as supervision in GNNs, we model the topology structure through a graph likelihood as extra supervision. Specifically, we first propose the contrastive graph likelihood, defined as the product of the edge likelihoods over all connected node pairs. We then apply two unfolded update steps via descent iterations. The first step updates the features in a single view with the aid of an initialized edge probability. The second step applies binary edge weights to homophily and heterophily views, respectively: the homophily view applies an attractive force to pull positively connected nodes closer, while the heterophily view uses a repulsive force to push negatively connected nodes apart. Extensive experiments show that our method achieves superior performance on semi-supervised node classification with sparse labels and excellent robustness under structural perturbations.
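To make the attraction/repulsion idea concrete, below is a minimal PyTorch sketch of a descent-style update that pulls positively connected (homophilic) node pairs together and pushes negatively connected (heterophilic) pairs apart. The function name, the step sizes `alpha` and `beta`, the number of iterations, and the plain gradient-descent form are illustrative assumptions, not the authors' implementation; the paper additionally derives these updates from its contrastive graph likelihood, which is not reproduced here.

```python
import torch

def contrastive_message_passing(x, pos_edges, neg_edges, alpha=0.5, beta=0.1, steps=2):
    """Sketch of an attraction/repulsion feature update (hypothetical parameters).

    x:         (N, d) node feature matrix
    pos_edges: (2, E_pos) index pairs treated as homophilic (positive) edges
    neg_edges: (2, E_neg) index pairs treated as heterophilic (negative) edges
    """
    h = x.clone()
    for _ in range(steps):
        update = torch.zeros_like(h)

        # Homophily view: attractive force pulls positively connected nodes closer.
        i, j = pos_edges
        diff_pos = h[i] - h[j]
        update.index_add_(0, i, -alpha * diff_pos)
        update.index_add_(0, j, alpha * diff_pos)

        # Heterophily view: repulsive force pushes negatively connected nodes apart.
        i, j = neg_edges
        diff_neg = h[i] - h[j]
        update.index_add_(0, i, beta * diff_neg)
        update.index_add_(0, j, -beta * diff_neg)

        h = h + update
    return h
```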

