Lirong Wu, Haitao Lin, Guojiang Zhao, Cheng Tan, Stan Z. Li
IEEE Trans Neural Netw Learn Syst. 2024 Sep 30;PP. doi: 10.1109/TNNLS.2024.3458405.
Recent years have witnessed great success in handling graph-related tasks with graph neural networks (GNNs). However, most existing GNNs rely on message passing to perform feature aggregation and transformation, where structural information is explicitly involved in the forward propagation by being coupled with node features through graph convolution at each layer. As a result, subtle feature noise or structure perturbation may cause severe error propagation, resulting in extremely poor robustness. In this article, we rethink the role played by graph structural information in training on graph data and identify that message passing is not the only path to modeling structural information. Inspired by this, we propose a simple but effective graph structure self-contrasting (GSSC) framework that learns graph structural information without message passing. The proposed framework is based purely on multilayer perceptrons (MLPs), where structural information is incorporated only implicitly, as prior knowledge guiding the computation of supervision signals, substituting for the explicit message propagation of GNNs. Specifically, it first applies structural sparsification (STR-Sparse) to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting (STR-Contrast) in the sparsified neighborhood to learn robust node representations. Finally, STR-Sparse and STR-Contrast are formulated as a bilevel optimization problem and solved in a unified framework. Extensive experiments demonstrate, both qualitatively and quantitatively, that the GSSC framework achieves encouraging performance with better generalization and robustness than other leading competitors. Code is publicly available at: https://github.com/LirongWu/GSSC.
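The sparsify-then-contrast pipeline described above can be illustrated with a toy NumPy sketch. This is only a conceptual stand-in under stated assumptions: the graph, MLP weights, cosine-similarity sparsifier, and InfoNCE-style loss below are illustrative choices of ours, not the paper's learned STR-Sparse module or its bilevel formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes with 4-d features; adjacency kept as neighbor lists.
X = rng.normal(size=(6, 4))
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def mlp(x, W1, W2):
    # Pure MLP encoder: no message passing over the graph structure.
    return np.maximum(x @ W1, 0) @ W2

W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
Z = mlp(X, W1, W2)  # node embeddings, shape (6, 8)
Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # row-normalized

def sparsify(i, keep_ratio=0.5):
    # STR-Sparse stand-in: keep the neighbors most similar to node i,
    # treating low-similarity edges as potentially noisy (a heuristic
    # proxy for the paper's learned sparsifier).
    nbrs = neighbors[i]
    sims = [float(Zn[i] @ Zn[j]) for j in nbrs]
    k = max(1, int(len(nbrs) * keep_ratio))
    order = np.argsort(sims)[::-1][:k]
    return [nbrs[t] for t in order]

def self_contrast_loss(i, tau=0.5):
    # STR-Contrast stand-in: InfoNCE-style loss that pulls node i toward
    # its sparsified neighbors and away from all other nodes; the graph
    # enters only through this supervision signal, never the forward pass.
    pos = sparsify(i)
    logits = Zn @ Zn[i] / tau
    mask = np.ones(len(logits), dtype=bool)
    mask[i] = False  # exclude the node itself from the partition
    log_denom = np.log(np.sum(np.exp(logits[mask])))
    return -np.mean([logits[j] - log_denom for j in pos])

loss = float(np.mean([self_contrast_loss(i) for i in range(6)]))
print(f"toy self-contrast loss: {loss:.4f}")
```

In a full training loop the MLP weights would be updated to minimize this loss, while the paper instead learns the sparsification jointly with the encoder as the inner and outer problems of a bilevel optimization.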