Chen Yang, Ye Zhonglin, Wang Zhaoyang, Lin Jingjing, Zhao Haixing
School of Computer Science and Technology, Shandong Technology and Business University, Yantai, 264005, Shandong, China.
School of Computer Science, Qinghai Normal University, Xining, 810008, Qinghai, China.
Sci Rep. 2024 Dec 4;14(1):30222. doi: 10.1038/s41598-024-79824-y.
Hypergraph Neural Networks (HGNNs) have achieved significant success on higher-order tasks. However, recent studies have shown that, like Graph Neural Networks, they are vulnerable to adversarial attacks: attackers can fool HGNNs by modifying node links in the hypergraph. Existing adversarial attacks on HGNNs consider only the targeted setting, and the more practical untargeted attack has not been discussed. To close this gap, we propose a derivative graph-based hypergraph structure attack, namely DGHSA, which focuses on degrading the global performance of HGNNs. Specifically, DGHSA consists of two modules: candidate set generation and candidate set evaluation. Gradients of the incidence matrix are obtained by training the HGNN, and a candidate set is then generated by modifying the hypergraph structure according to gradient rules. In the candidate set evaluation module, DGHSA uses a derivative graph metric to assess the impact of each attack on node similarity in the candidate hypergraphs, and finally selects the candidate hypergraph with the worst node similarity as the optimal perturbed hypergraph. We have conducted extensive experiments on four commonly used datasets, and the results show that DGHSA significantly degrades the performance of HGNNs on node classification tasks.
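To make the described pipeline concrete, below is a minimal Python/PyTorch sketch of a DGHSA-style untargeted structure attack, assuming a single-layer HGNN convolution on a toy hypergraph. The mean cosine similarity of node embeddings is used here only as a stand-in for the paper's derivative graph metric, and the flip budget, variable names, and gradient rule are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a DGHSA-style untargeted structure attack (illustrative,
# not the authors' code). Assumes a single-layer HGNN and uses cosine node
# similarity as a proxy for the derivative-graph metric described in the paper.
import torch
import torch.nn.functional as F

def hgnn_forward(H, X, theta):
    """One HGNN convolution: D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X theta."""
    Dv = H.sum(1).clamp(min=1)           # node degrees
    De = H.sum(0).clamp(min=1)           # hyperedge degrees
    Dv_inv_sqrt = torch.diag(Dv.pow(-0.5))
    De_inv = torch.diag(De.pow(-1.0))
    A = Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt
    return A @ X @ theta

# Toy hypergraph: 8 nodes, 4 hyperedges, 3 features, 2 classes (all illustrative).
torch.manual_seed(0)
H = (torch.rand(8, 4) > 0.5).float()     # incidence matrix
X = torch.randn(8, 3)                    # node features
y = torch.randint(0, 2, (8,))            # node labels
theta = torch.randn(3, 2, requires_grad=True)

# 1) Candidate set generation: gradients of the training loss w.r.t. the
#    incidence matrix, obtained by backpropagating through the HGNN.
H_var = H.clone().requires_grad_(True)
loss = F.cross_entropy(hgnn_forward(H_var, X, theta), y)
loss.backward()
grad = H_var.grad

# Gradient rule (assumed): add a node-hyperedge link where the gradient is
# positive and the entry is 0, remove it where the gradient is negative and
# the entry is 1; each flip yields one candidate hypergraph.
scores = grad * (1 - 2 * H)              # positive score = loss-increasing flip
flips = torch.topk(scores.flatten(), k=5).indices
candidates = []
for idx in flips:
    i, j = divmod(int(idx), H.shape[1])
    Hc = H.clone()
    Hc[i, j] = 1 - Hc[i, j]
    candidates.append(Hc)

# 2) Candidate set evaluation: keep the hypergraph that most degrades node
#    similarity (mean cosine similarity of HGNN embeddings used as a proxy).
def mean_node_similarity(Hc):
    with torch.no_grad():
        Z = hgnn_forward(Hc, X, theta)
    Zn = F.normalize(Z, dim=1)
    return (Zn @ Zn.t()).mean().item()

perturbed_H = min(candidates, key=mean_node_similarity)
```

In this sketch the candidate with the lowest mean node similarity is returned as the perturbed hypergraph; in practice the budget, the similarity metric, and the HGNN architecture would follow the paper's actual settings.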