School of Software, South China University of Technology, Guangzhou, Guangdong, 510006, China; Pazhou Lab, Guangzhou, Guangdong, 510006, China.
School of Software, South China University of Technology, Guangzhou, Guangdong, 510006, China.
Neural Netw. 2024 Jul;175:106276. doi: 10.1016/j.neunet.2024.106276. Epub 2024 Mar 28.
Graph Neural Networks (GNNs) have gained widespread adoption and achieved remarkable success in various real-world applications. Nevertheless, recent studies reveal that GNNs are vulnerable to graph adversarial attacks that fool them by modifying the graph structure. This vulnerability undermines the robustness of GNNs and poses significant security and privacy risks across applications. Hence, it is crucial to develop robust GNN models that can effectively defend against such attacks. One simple approach is to remodel the graph. However, while learning the node representations required for reweighting edges, most existing methods cannot fully preserve the similarity relationships among the original nodes. Furthermore, they lack supervision information about adversarial perturbations, which hampers their ability to recognize adversarial edges. To address these limitations, we propose a novel Dual Robust Graph Neural Network (DualRGNN) against graph adversarial attacks. DualRGNN first incorporates a node-similarity-preserving graph refining (SPGR) module that prunes and refines the graph based on learned node representations, which retain the original nodes' similarity relationships, thereby mitigating the poisoning effect of graph adversarial attacks on graph data. DualRGNN then employs an adversarial-supervised graph attention (ASGAT) network that treats adversarial edges as supervision signals to strengthen the model's ability to identify them. Through extensive experiments on four benchmark datasets, DualRGNN demonstrates remarkable robustness against various graph adversarial attacks.
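The abstract does not give the SPGR module's exact formulation, but the core idea it describes (reweighting or pruning edges according to similarity between learned node representations) can be sketched minimally. The function name `refine_graph`, the cosine-similarity measure, and the threshold `tau` below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def refine_graph(adj, features, tau=0.1):
    """Illustrative similarity-based graph refining (not the paper's SPGR).

    adj:      (n, n) binary adjacency matrix of the (possibly poisoned) graph.
    features: (n, d) node representations.
    tau:      similarity threshold below which an edge is pruned (assumed).

    Returns a reweighted adjacency where each surviving edge carries the
    cosine similarity of its endpoints; low-similarity edges, which are
    more likely adversarial insertions, are removed.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.maximum(norms, 1e-12)   # row-normalize features
    sim = normed @ normed.T                        # pairwise cosine similarity
    # Keep only existing edges whose endpoint similarity clears the threshold.
    refined = np.where((adj > 0) & (sim >= tau), sim, 0.0)
    return refined
```

In this sketch, an adversarially inserted edge between two dissimilar nodes receives a similarity below `tau` and is dropped, while edges between similar nodes survive with a weight usable for attention or message passing downstream.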