IEEE Trans Image Process. 2022;31:5691-5705. doi: 10.1109/TIP.2022.3201472. Epub 2022 Sep 2.
Recent research shows that deep neural networks are vulnerable to several types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, backdoor attacks are the most cunning and can occur at almost every stage of the deep learning pipeline. Backdoor attacks have attracted considerable interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to simple pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called "Poison Ink". Concretely, we first leverage image structures as the target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. Because image structures preserve their semantic meaning under data transformations, such a trigger pattern is inherently robust to them. We then use a deep injection network to embed this input-aware trigger pattern into the cover image to achieve stealthiness. Compared with existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink not only generalizes to different datasets and network architectures but is also flexible across different attack scenarios. Moreover, it shows strong resistance against many state-of-the-art defense techniques.
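To make the trigger-generation idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes Canny edge detection as the "image structure" extractor and a fixed RGB "poison ink" color, and it replaces the paper's learned deep injection network with naive alpha blending purely for illustration.

```python
import cv2
import numpy as np

def make_poison_ink_trigger(image_bgr, ink_color=(0, 0, 255)):
    """Structure-based trigger sketch: extract edge structures and fill them
    with a fixed 'poison ink' color. Both the Canny extractor and the fixed
    ink_color are illustrative assumptions, not the paper's exact design."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # binary edge map (image structure)
    trigger = np.zeros_like(image_bgr)
    trigger[edges > 0] = ink_color             # poison ink placed only on edges
    return trigger, edges

def blend_trigger(cover_bgr, trigger, edges, alpha=0.1):
    """Crude stand-in for the paper's deep injection network: alpha-blend the
    edge-aligned trigger into the cover image so the change stays subtle."""
    poisoned = cover_bgr.astype(np.float32).copy()
    mask = edges > 0
    poisoned[mask] = (1 - alpha) * poisoned[mask] + alpha * trigger[mask]
    return poisoned.clip(0, 255).astype(np.uint8)
```

Because the trigger follows the image's own edges, it survives transformations (resizing, flipping, mild compression) that would destroy a fixed patch-style trigger, which is the intuition behind the claimed robustness.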