College of Intelligence and Computing, Tianjin University, Tianjin, China.
Neural Netw. 2024 Dec;180:106668. doi: 10.1016/j.neunet.2024.106668. Epub 2024 Aug 29.
Unsupervised graph learning techniques have garnered increasing interest among researchers. These methods maximize mutual information to generate representations of nodes and graphs. We show that they are susceptible to backdoor attacks, in which an adversary poisons a small portion of the unlabeled graph data (e.g., node features and graph structure) by introducing triggers into the graph. This tampering corrupts the learned representations and puts various downstream applications at risk. Previous backdoor attacks in supervised learning operate primarily on the label space and are therefore not directly applicable to unlabeled graph data. To tackle this challenge, we introduce GRBA, a gradient-based first-order backdoor attack method. To the best of our knowledge, this is the first study of backdoor attacks on unsupervised graph learning. The attack requires no prior knowledge of downstream tasks, since it operates directly on the learned representations. Furthermore, it is versatile and can be applied to various downstream tasks, including node classification, node clustering, and graph classification. We evaluate GRBA on state-of-the-art unsupervised learning models, and the experimental results substantiate the effectiveness and evasiveness of GRBA in both node-level and graph-level tasks.
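The threat model described above, poisoning unlabeled graph data by attaching a small trigger to both node features and graph structure, can be illustrated with a minimal sketch. The function `inject_trigger`, the tensor names, and the fixed all-ones trigger pattern below are hypothetical illustrations of generic graph trigger injection, not the paper's GRBA method, which instead optimizes the trigger with a gradient-based first-order procedure.

```python
import torch

def inject_trigger(x, adj, poison_idx, trigger_x, trigger_adj):
    """Attach one copy of a small trigger subgraph to every poisoned node.

    x           -- [N, F] node feature matrix
    adj         -- [N, N] dense symmetric adjacency matrix
    poison_idx  -- 1-D tensor of node indices to poison
    trigger_x   -- [T, F] features of the trigger nodes (fixed pattern here)
    trigger_adj -- [T, T] adjacency among the trigger nodes
    Returns the enlarged (x, adj) of the poisoned graph.
    """
    for i in poison_idx.tolist():
        n = x.shape[0]                      # current number of nodes
        t = trigger_x.shape[0]
        # append the trigger-node features to the feature matrix
        x = torch.cat([x, trigger_x], dim=0)
        # grow the adjacency matrix and wire in the trigger subgraph
        new_adj = torch.zeros(n + t, n + t, dtype=adj.dtype)
        new_adj[:n, :n] = adj
        new_adj[n:, n:] = trigger_adj
        # connect the first trigger node to the poisoned node (undirected)
        new_adj[i, n] = new_adj[n, i] = 1
        adj = new_adj
    return x, adj

# Toy usage: a random 6-node graph with one poisoned node.
x = torch.randn(6, 4)
adj = (torch.rand(6, 6) > 0.7).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
trigger_x = torch.ones(2, 4)                       # hypothetical fixed trigger features
trigger_adj = torch.tensor([[0., 1.], [1., 0.]])   # a single trigger edge
x_p, adj_p = inject_trigger(x, adj, torch.tensor([3]), trigger_x, trigger_adj)
print(x_p.shape, adj_p.shape)                      # torch.Size([8, 4]) torch.Size([8, 8])
```

In this sketch the trigger is a hand-fixed pattern; the point of a gradient-based attack such as the one the abstract describes is to choose the trigger features and placement so that the mutual-information-based encoder maps triggered inputs to attacker-chosen regions of representation space, while the clean-data representations stay largely unaffected.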