Zhang Xiao, Bao Peng
School of Software Engineering, Beijing Jiaotong University, Beijing, 100081, China.
Neural Netw. 2025 Jul 15;192:107868. doi: 10.1016/j.neunet.2025.107868.
Graph neural networks (GNNs) have demonstrated strong performance on graph-related tasks, yet recent studies show that GNNs are vulnerable to adversarial attacks. Developing a robust GNN framework has therefore become a popular research topic. Current defense methods based on structure purification or robust networks typically rely only on feature information and a single view, and thus tend to overlook critical information. To address these challenges, we conduct an in-depth study of local and global information on graphs and propose Multi-view Contrastive Learning for Graph Adversarial Defense (COLA) to improve model robustness. On the one hand, we use edge directionality and graph diffusion to generate two augmented views based on the structure, features, and supervised information of the graph. On the other hand, we apply multi-view contrastive learning that encodes local and global information by constructing different contrast paths, yielding reliable node representations. We validate the effectiveness of COLA on seven benchmark datasets, including four homophilic graphs and three heterophilic graphs. The results show that COLA successfully resists various attacks and outperforms state-of-the-art baselines.
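As an illustrative sketch only (the abstract does not specify COLA's exact operators), a global "diffusion" view is commonly built with a personalized-PageRank diffusion matrix, and two views can be compared with an InfoNCE-style contrastive loss. The choice of PPR, and the `alpha` and `tau` values below, are assumptions, not details from the paper:

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.15):
    """Personalized-PageRank diffusion S = alpha * (I - (1-alpha) * A_hat)^-1,
    a common way to obtain a global graph view (alpha is an assumed value;
    the paper's exact diffusion operator is not given in the abstract)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    a_hat = d_inv_sqrt @ adj @ d_inv_sqrt          # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_hat)

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views' node embeddings:
    corresponding rows of z1 and z2 are positives, all other rows negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)                  # pairwise similarity matrix
    return float(-np.log(np.diag(sim) / sim.sum(axis=1)).mean())
```

A defense of this flavor would encode the original graph and the diffusion view with a shared GNN and minimize the contrastive loss so that node representations agree across views, making them harder to perturb with local structural attacks.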