
Dual-targeted adversarial example in evasion attack on graph neural networks

Authors

Kwon Hyun, Kim Dae-Jin

Affiliations

Department of Artificial Intelligence and Data Science, Korea Military Academy, Seoul, 01819, South Korea.

Department of Architectural Engineering, Kyung Hee University, Gyeonggi, 17101, South Korea.

Publication

Sci Rep. 2025 Jan 31;15(1):3912. doi: 10.1038/s41598-025-85493-2.

Abstract

This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing specific misclassifications in a single model, our approach creates adversarial samples that can simultaneously target multiple models, each inducing distinct misclassifications. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks that are capable of affecting various models with different objectives. We provide a detailed explanation of the method's principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize the impact using datasets such as Reddit and OGBN-Products. Our contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems.
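The core idea of a dual-targeted attack — one perturbation that simultaneously drives several models toward different, attacker-chosen misclassifications — can be illustrated with a small sketch. This is not the paper's actual method or its GNN architecture: the `dual_targeted_attack` function, the linear `(W, b)` surrogate "models", and all hyperparameters below are hypothetical stand-ins, used only to show the shared-gradient-descent structure of optimizing one input against a sum of per-model targeted losses under a bounded perturbation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, target):
    """Cross-entropy of one example against an integer target class."""
    return -np.log(softmax(logits)[target])

def dual_targeted_attack(x, models, targets, eps=1.0, steps=300, lr=0.05):
    """Craft a single feature perturbation that pushes each model toward
    its own (distinct) target class, within an L-infinity ball of radius eps.

    models  -- list of (W, b) pairs standing in for surrogate classifiers
    targets -- one target class index per model
    """
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x_adv)
        for (W, b), t in zip(models, targets):
            p = softmax(W @ x_adv + b)
            onehot = np.zeros_like(p)
            onehot[t] = 1.0
            # d/dx of cross_entropy(W x + b, t) is W^T (softmax - onehot)
            grad += W.T @ (p - onehot)
        x_adv -= lr * grad                        # descend the summed loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

# Toy setup: two random linear "models" over 8-dim node features, 3 classes,
# each assigned a *different* target misclassification.
rng = np.random.default_rng(0)
models = [(rng.standard_normal((3, 8)), rng.standard_normal(3)) for _ in range(2)]
x = rng.standard_normal(8)
targets = [1, 2]

x_adv = dual_targeted_attack(x, models, targets)
```

Because the summed targeted loss is minimized over one shared input, the perturbation must trade off between the two models' objectives — the tension the paper resolves for GNN node classifiers, where perturbations act on graph structure and node features rather than a plain feature vector.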


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e8e/11785780/749c1087dad4/41598_2025_85493_Fig1_HTML.jpg
