Chang YouKang, Zhao Hong, Wang Weijie
School of Computer and Communication, Lanzhou University of Technology, Lanzhou, Gansu, China.
PeerJ Comput Sci. 2023 Jan 13;9:e1197. doi: 10.7717/peerj-cs.1197. eCollection 2023.
Vision Transformer (ViT) models have achieved strong results in computer vision tasks, and their performance has been shown to exceed that of convolutional neural networks (CNNs). However, the robustness of ViT models has received comparatively little study. To address this gap, we investigate the robustness of the ViT model under adversarial attack and enhance it by introducing a ResNet-SE module that acts on the Attention module of the ViT model. The Attention module not only learns edge and line information but can also extract increasingly complex features; the ResNet-SE module highlights the important information in each feature map and suppresses minor information, which helps the model extract key features. Experimental results show that the accuracy of the proposed defense method is 19.812%, 17.083%, 18.802%, 21.490%, and 18.010% against the Basic Iterative Method (BIM), C&W, DeepFool, DIFGSM, and MDIFGSM attacks, respectively. Compared with several other models, the proposed defense method shows strong robustness.
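As a concrete reading of the architecture the abstract describes, the sketch below shows a minimal Squeeze-and-Excitation block with a residual connection applied to ViT token features. It illustrates the general ResNet-SE idea only, not the paper's exact design; the class name `ResNetSE`, the channel dimension `dim`, and the reduction ratio `r` are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class ResNetSE(nn.Module):
    """Squeeze-and-Excitation block with a residual connection.

    Hypothetical reconstruction of the ResNet-SE idea: `dim` and the
    reduction ratio `r` are illustrative, not taken from the paper.
    """
    def __init__(self, dim: int, r: int = 16):
        super().__init__()
        # Excitation: two-layer bottleneck producing per-channel gates in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // r),
            nn.ReLU(inplace=True),
            nn.Linear(dim // r, dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- token features from a ViT Attention block.
        s = x.mean(dim=1)            # squeeze: global statistics over tokens
        w = self.fc(s).unsqueeze(1)  # excite: per-channel importance weights
        return x + x * w             # reweight features, keep a residual path
```

In a ViT block, such a module would typically be inserted after the attention output, so the learned channel gates can emphasize informative feature maps and damp minor ones, matching the role the abstract assigns to the ResNet-SE module.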