Li Chenwei, Zhang Hengwei, Yang Bo, Wang Jindong
State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, Henan, China.
Henan Key Laboratory of Information Security, Zhengzhou, Henan, China.
PeerJ Comput Sci. 2023 Jul 25;9:e1475. doi: 10.7717/peerj-cs.1475. eCollection 2023.
Convolutional neural networks have achieved great success in computer vision, but they output incorrect predictions when carefully crafted perturbations are applied to the original input. These perturbed inputs, indistinguishable from the originals to humans, are called adversarial examples, and this property makes them useful for evaluating network robustness and security. In a white-box attack, where the network structure and parameters are already known, the attack success rate is considerable. In a black-box attack, however, the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used in training generalizable neural networks, and proposes a resizing-invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms other baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal models and defense models.
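To make the idea concrete, the following is a minimal sketch (not the authors' released code) of how a random-resizing transformation and an ensemble of surrogate models could be folded into an iterative gradient-based attack. The function names, scale range, and hyperparameters are illustrative assumptions, and the attack shown is a generic I-FGSM variant rather than the paper's exact algorithm.

```python
# Hypothetical sketch: resizing-based model augmentation + surrogate ensemble
# in an iterative FGSM-style attack. All names and hyperparameters are assumed.
import torch
import torch.nn.functional as F

def resize_transform(x, low=0.8, high=1.2):
    """Randomly resize the batch, then interpolate back to the original size."""
    _, _, h, w = x.shape
    scale = torch.empty(1).uniform_(low, high).item()
    resized = F.interpolate(x, size=(int(h * scale), int(w * scale)),
                            mode="bilinear", align_corners=False)
    # Return to the original resolution so every surrogate model accepts the input.
    return F.interpolate(resized, size=(h, w), mode="bilinear", align_corners=False)

def ensemble_ifgsm(models, x, y, eps=16 / 255, steps=10, m=5):
    """Average gradients over m resized copies and all surrogate models per step."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        for _ in range(m):                      # model augmentation via resizing
            xt = resize_transform(x_adv)
            for model in models:                # ensemble of surrogate models
                loss = loss + F.cross_entropy(model(xt), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

The resizing step stands in for evaluating many slightly different models (model augmentation), while summing losses across several surrogate networks is the usual ensemble trick for improving black-box transferability.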