Node injection for class-specific network poisoning.

Affiliations

Indraprastha Institute of Information Technology, Delhi, India.

Indian Institute of Technology, Delhi, India.

Publication information

Neural Netw. 2023 Sep;166:236-247. doi: 10.1016/j.neunet.2023.07.025. Epub 2023 Jul 22.

DOI: 10.1016/j.neunet.2023.07.025
PMID: 37517358
Abstract

Graph Neural Networks (GNNs) are powerful in learning rich network representations that aid the performance of downstream tasks. However, recent studies showed that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation. Among these, node injection attacks are more practical as they do not require manipulation in the existing network and can be performed more realistically. In this paper, we propose a novel problem statement - a class-specific poison attack on graphs in which the attacker aims to misclassify specific nodes in the target class into a different class using node injection. Additionally, nodes are injected in such a way that they camouflage as benign nodes. We propose NICKI, a novel attacking strategy that utilizes an optimization-based approach to sabotage the performance of GNN-based node classifiers. NICKI works in two phases - it first learns the node representation and then generates the features and edges of the injected nodes. Extensive experiments and ablation studies on four benchmark networks show that NICKI is consistently better than four baseline attacking strategies for misclassifying nodes in the target class. We also show that the injected nodes are properly camouflaged as benign, thus making the poisoned graph indistinguishable from its clean version w.r.t various topological properties.
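The abstract describes NICKI's two phases: first learn a node representation, then generate the features and edges of the injected nodes so that target-class nodes are misclassified. The paper's actual optimization is not given here, but the overall idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: a one-layer linear-GCN surrogate with frozen random weights, a toy graph, one injected node wired directly to the targets, and a closed-form gradient step on the wrong-class margin. It is not NICKI itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): 6 benign nodes, 2 features, 2 classes.
n, d, c = 6, 2, 2
X = rng.normal(size=(n, d))          # benign node features
A = np.eye(n)                        # adjacency with self-loops only
W = rng.normal(size=(d, c))          # frozen surrogate-GCN weights

targets = [0, 1]     # target-class nodes the attacker wants misclassified
wrong_class = 1      # class they should be pushed into

def predict(X_all, A_all):
    """One-layer linear-GCN surrogate: row-normalised neighbourhood mean."""
    A_hat = A_all / A_all.sum(axis=1, keepdims=True)
    return (A_hat @ X_all @ W).argmax(axis=1)

# "Edge generation" phase (simplified): wire one injected node to every
# target node and to itself.
A_aug = np.zeros((n + 1, n + 1))
A_aug[:n, :n] = A
A_aug[n, n] = 1.0
for t in targets:
    A_aug[t, n] = A_aug[n, t] = 1.0

# "Feature generation" phase (simplified): gradient ascent on the
# wrong-class logit margin at the targets. With a linear surrogate the
# gradient w.r.t. the injected features is closed-form: each target's
# mixing weight for the injected node times the weight-column difference.
A_hat = A_aug / A_aug.sum(axis=1, keepdims=True)
x_inj = np.zeros(d)
lr = 0.5
for _ in range(2000):
    grad = sum(A_hat[t, n] for t in targets) * (W[:, wrong_class] - W[:, 1 - wrong_class])
    x_inj += lr * grad

X_aug = np.vstack([X, x_inj])
preds = predict(X_aug, A_aug)
print(preds[targets])   # predicted classes of the poisoned target nodes
```

Because the surrogate is linear, the injected node's contribution to each target's logits grows without bound along the fixed gradient direction, so after enough steps the targets flip to the attacker's chosen class. The real attack additionally constrains the injected features and edges so the poisoned graph stays statistically indistinguishable from the clean one, which this sketch omits.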

Similar articles

1. Node injection for class-specific network poisoning.
Neural Netw. 2023 Sep;166:236-247. doi: 10.1016/j.neunet.2023.07.025. Epub 2023 Jul 22.
2. A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.
Neural Netw. 2024 Jul;175:106276. doi: 10.1016/j.neunet.2024.106276. Epub 2024 Mar 28.
3. Spectral adversarial attack on graph via node injection.
Neural Netw. 2025 Apr;184:107046. doi: 10.1016/j.neunet.2024.107046. Epub 2025 Jan 1.
4. Augmented Graph Neural Network with hierarchical global-based residual connections.
Neural Netw. 2022 Jun;150:149-166. doi: 10.1016/j.neunet.2022.03.008. Epub 2022 Mar 10.
5. Graph Transformer Networks: Learning meta-path graphs to improve GNNs.
Neural Netw. 2022 Sep;153:104-119. doi: 10.1016/j.neunet.2022.05.026. Epub 2022 Jun 4.
6. SP-GNN: Learning structure and position information from graphs.
Neural Netw. 2023 Apr;161:505-514. doi: 10.1016/j.neunet.2023.01.051. Epub 2023 Feb 4.
7. Graph Aggregating-Repelling Network: Do Not Trust All Neighbors in Heterophilic Graphs.
Neural Netw. 2024 Oct;178:106484. doi: 10.1016/j.neunet.2024.106484. Epub 2024 Jun 21.
8. Explanatory subgraph attacks against Graph Neural Networks.
Neural Netw. 2024 Apr;172:106097. doi: 10.1016/j.neunet.2024.106097. Epub 2024 Jan 23.
9. Black-box attacks on dynamic graphs via adversarial topology perturbations.
Neural Netw. 2024 Mar;171:308-319. doi: 10.1016/j.neunet.2023.11.060. Epub 2023 Dec 1.
10. Harnessing collective structure knowledge in data augmentation for graph neural networks.
Neural Netw. 2024 Dec;180:106651. doi: 10.1016/j.neunet.2024.106651. Epub 2024 Aug 23.

Cited by

1. DGHSA: derivative graph-based hypergraph structure attack.
Sci Rep. 2024 Dec 4;14(1):30222. doi: 10.1038/s41598-024-79824-y.