Srinivasan Srinitish, OmKumar Chandraumakantham
School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India.
Sci Rep. 2025 Apr 29;15(1):14998. doi: 10.1038/s41598-025-97956-7.
Graph Neural Networks (GNNs) have gained popularity over the past few years. Their ability to model relationships between entities of the same and different kinds, represent molecules, and model flows has made them a go-to tool for researchers. However, owing to the abstract nature of graphs, no ideal transformation exists for representing nodes and edges in Euclidean space. Moreover, GNNs are highly susceptible to adversarial attacks, yet the GNN literature lacks a gradient-based attack built on latent space embeddings. Such attacks, classified as white-box attacks, tamper with the latent space representation of graphs without creating any noticeable difference in the overall distribution. Developing and testing GNN models against such attacks on graph classification tasks would enable researchers to understand and build stronger, more robust classification systems. Further, adversarial attack tests in the GNN literature have been performed on weaker, less representative neural network architectures. To address these gaps, we propose a white-box gradient-based attack derived from contrastive latent space representations. We also develop a strong base (victim) model that learns the spectral and spatial properties of graphs while accounting for isomorphic properties. We experimentally validate this model on 4 benchmark datasets from the molecular property prediction literature, where it outperformed over 75% of all LLM-based architectures. When attacked with our proposed adversarial strategy, the model's overall performance drops by an average of 25%, thereby closing several gaps in the existing literature. The code for our paper can be found at https://github.com/Deceptrax123/An-edge-sensitivity-based-gradient-attack-on-GIN-for-inductive-problems.
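Since the abstract names the attack only at a high level, the following is a minimal, hypothetical sketch of what an edge-sensitivity gradient attack on a GIN classifier could look like in PyTorch Geometric, using an FGSM-style perturbation of relaxed edge weights. The architecture, loss, and epsilon below are illustrative assumptions, not the authors' implementation, which uses a contrastive latent-space objective.

```python
# Hypothetical sketch: white-box gradient attack on edge weights of a GIN
# graph classifier. All design choices here are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GINEConv, global_add_pool

class GINClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        mlp = Sequential(Linear(in_dim, hidden), ReLU(), Linear(hidden, hidden))
        # GINEConv admits 1-dimensional edge attributes, used here as edge weights
        self.conv = GINEConv(mlp, edge_dim=1)
        self.head = Linear(hidden, num_classes)

    def forward(self, x, edge_index, edge_attr, batch):
        h = self.conv(x, edge_index, edge_attr)
        return self.head(global_add_pool(h, batch))

def edge_gradient_attack(model, data, label, epsilon=0.1):
    """Perturb edge weights along the sign of the loss gradient (FGSM-style)."""
    model.eval()
    # Relaxed edge weights, initialised to 1 (all edges fully present)
    edge_attr = torch.ones(data.edge_index.size(1), 1, requires_grad=True)
    batch = torch.zeros(data.num_nodes, dtype=torch.long)  # single-graph batch
    logits = model(data.x, data.edge_index, edge_attr, batch)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Edges with the largest gradient magnitude are the most "sensitive";
    # nudging each weight along the gradient sign maximally raises the loss.
    return (edge_attr + epsilon * edge_attr.grad.sign()).detach()
```

The returned perturbed weights would then replace the original edge attributes at inference time, and the drop in classification accuracy relative to the clean graphs measures the attack's strength, analogous to the average 25% degradation reported above.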