

Derivative-free optimization adversarial attacks for graph convolutional networks.

Author Information

Yang Runze, Long Teng

Affiliations

School of Information Engineering, China University of Geosciences, Beijing, China.

Publication Information

PeerJ Comput Sci. 2021 Aug 24;7:e693. doi: 10.7717/peerj-cs.693. eCollection 2021.

DOI: 10.7717/peerj-cs.693
PMID: 34541312
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8409335/
Abstract

In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks: an attacker can maliciously modify edges or nodes of the graph to mislead the model's classification of the target nodes, or even degrade the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) that generates graph adversarial examples without using gradients and makes it convenient to apply advanced DFO algorithms. Second, we implement a direct attack algorithm (DFDA) on top of this framework using the Nevergrad library. Additionally, we overcome the problem of the large search space by redesigning the perturbation vector with a constrained size. Finally, we conduct a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases and can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node classification adversarial attacks.
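The attack loop the abstract describes — encode a budget of at most k edge flips as a fixed-size perturbation vector, score it with black-box queries to the victim model, and search it with a derivative-free optimizer — maps naturally onto Nevergrad's API. The sketch below is a minimal illustration of that idea, not the paper's DFDA implementation; `query_model` and `candidate_edges` are hypothetical placeholders for the victim model's query interface and the attacker's allowed edge set.

```python
# Minimal sketch: black-box edge-flip attack via derivative-free optimization.
# Assumptions (not from the paper's code): query_model(edge_flips, node) returns
# per-class probabilities for `node` after applying `edge_flips` to the graph,
# and candidate_edges is the attacker's list of (u, v) edges eligible to flip.
import nevergrad as ng

def dfo_edge_attack(query_model, candidate_edges, target_node, true_label,
                    k=8, budget=500):
    """Search for at most k edge flips that misclassify target_node."""
    # Constrained perturbation vector: exactly k slots, each holding the index
    # of one candidate edge. Fixing k bounds the search space, in the spirit of
    # the abstract's constrained-size redesign.
    param = ng.p.Tuple(*[ng.p.Choice(range(len(candidate_edges)))
                         for _ in range(k)])
    optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=budget)

    def loss(indices):
        # Duplicate slots collapse, so the perturbation uses at most k flips.
        flips = [candidate_edges[i] for i in set(indices)]
        probs = query_model(flips, target_node)
        # Classification margin of the true class: it turns negative once the
        # target node is misclassified, so minimizing it drives the attack.
        best_other = max(p for c, p in enumerate(probs) if c != true_label)
        return probs[true_label] - best_other

    recommendation = optimizer.minimize(loss)
    return [candidate_edges[i] for i in set(recommendation.value)]
```

Because Nevergrad exposes its optimizers (OnePlusOne, CMA, NGOpt, and others) behind the same parametrization interface, swapping in a different DFO algorithm is a one-line change, which is presumably what the abstract means by applying advanced DFO algorithms "conveniently".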


Figures (PMC full text)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/0dd7fd4dc453/peerj-cs-07-693-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/1bb2eae22209/peerj-cs-07-693-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/e05db49cf1e0/peerj-cs-07-693-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/9b63191ab1be/peerj-cs-07-693-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/68881dec84fb/peerj-cs-07-693-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/87851cb3517b/peerj-cs-07-693-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/70c242822f1d/peerj-cs-07-693-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/5326b203a186/peerj-cs-07-693-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/2c4f2c39ec7c/peerj-cs-07-693-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/589c99a6faa0/peerj-cs-07-693-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/13cdf99d8006/peerj-cs-07-693-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/e4529517f361/peerj-cs-07-693-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/7b803389062e/peerj-cs-07-693-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/9db43df655e0/peerj-cs-07-693-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/57a5db996682/peerj-cs-07-693-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ae0/8409335/7043a403b72f/peerj-cs-07-693-g016.jpg

Similar Articles

1. Derivative-free optimization adversarial attacks for graph convolutional networks.
PeerJ Comput Sci. 2021 Aug 24;7:e693. doi: 10.7717/peerj-cs.693. eCollection 2021.
2. A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization.
Sensors (Basel). 2020 Dec 14;20(24):7158. doi: 10.3390/s20247158.
3. AN-GCN: An Anonymous Graph Convolutional Network Against Edge-Perturbing Attacks.
IEEE Trans Neural Netw Learn Syst. 2022 May 13;PP. doi: 10.1109/TNNLS.2022.3172296.
4. A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.
Neural Netw. 2024 Jul;175:106276. doi: 10.1016/j.neunet.2024.106276. Epub 2024 Mar 28.
5. HyGloadAttack: Hard-label black-box textual adversarial attacks via hybrid optimization.
Neural Netw. 2024 Oct;178:106461. doi: 10.1016/j.neunet.2024.106461. Epub 2024 Jun 12.
6. MCGCL: Adversarial attack on graph contrastive learning based on momentum gradient candidates.
PLoS One. 2024 Jun 6;19(6):e0302327. doi: 10.1371/journal.pone.0302327. eCollection 2024.
7. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
Entropy (Basel). 2022 Mar 15;24(3):412. doi: 10.3390/e24030412.
8. Vulnerability of classifiers to evolutionary generated adversarial examples.
Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.
9. Black-box attacks on dynamic graphs via adversarial topology perturbations.
Neural Netw. 2024 Mar;171:308-319. doi: 10.1016/j.neunet.2023.11.060. Epub 2023 Dec 1.
10. Node injection for class-specific network poisoning.
Neural Netw. 2023 Sep;166:236-247. doi: 10.1016/j.neunet.2023.07.025. Epub 2023 Jul 22.
