Kwon Hyun, Kim Dae-Jin
Department of Artificial Intelligence and Data Science, Korea Military Academy, Seoul, 01819, South Korea.
Department of Architectural Engineering, Kyung Hee University, Gyeonggi, 17101, South Korea.
Sci Rep. 2025 Jan 31;15(1):3912. doi: 10.1038/s41598-025-85493-2.
This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing a specific misclassification in a single model, our approach creates adversarial samples that simultaneously target multiple models, inducing a distinct misclassification in each. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks capable of affecting various models with different objectives. We provide a detailed explanation of the method's principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize its impact using datasets such as Reddit and OGBN-Products. Our contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems.
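The abstract does not specify the attack's implementation, but the core idea it describes can be sketched as follows: perturb a target node's features by descending a joint loss that pushes two different surrogate GNNs toward two different target labels. The sketch below is illustrative only, not the paper's algorithm; the one-layer GCN surrogates, fixed random weights, step size, and step count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric GCN normalization with self-loops: D^-1/2 (A+I) D^-1/2
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy graph: 8 nodes, 5-dim features, 3 classes (all hypothetical sizes).
n, f, c = 8, 5, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
A_hat = normalize_adj(A)
X = rng.standard_normal((n, f))
W1 = rng.standard_normal((f, c))   # surrogate model 1 (fixed weights)
W2 = rng.standard_normal((f, c))   # surrogate model 2 (fixed weights)

v, t1, t2 = 0, 1, 2  # attacked node; a distinct target label per model

def ce_and_grad(X, W, target):
    # Cross-entropy of node v toward `target` for a one-layer GCN
    # Z = A_hat X W, and its gradient with respect to the features X.
    P = softmax(A_hat @ X @ W)
    loss = -np.log(P[v, target] + 1e-12)
    delta = np.zeros_like(P)
    delta[v] = P[v]
    delta[v, target] -= 1.0        # dL/dZ, nonzero only at row v
    return loss, A_hat.T @ delta @ W.T

loss0 = ce_and_grad(X, W1, t1)[0] + ce_and_grad(X, W2, t2)[0]
for _ in range(50):                # joint descent on both targets at once
    _, g1 = ce_and_grad(X, W1, t1)
    _, g2 = ce_and_grad(X, W2, t2)
    X = X - 0.05 * (g1 + g2)
loss1 = ce_and_grad(X, W1, t1)[0] + ce_and_grad(X, W2, t2)[0]
print(loss1 < loss0)
```

Because the combined loss is reduced jointly, a single perturbed feature matrix drives both surrogates toward their respective (different) target labels, which is the dual-targeted property the abstract highlights.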