Cai Wei, Di Xingyu, Wang Xin, Gao Weijie, Jia Haoran
The Third Faculty of Xi'an Research Institute of High Technology, Xi'an 710064, China.
Entropy (Basel). 2024 Oct 24;26(11):903. doi: 10.3390/e26110903.
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be mounted in the physical world. However, most existing adversarial camouflage textures that attack object-detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated textures appear conspicuous to human observers. To address this issue, we add a style transfer module to an adversarial texture generation framework. By computing the style loss between the generated texture and a specified style image, the framework is guided to produce adversarial textures with good stealthiness that are not easily detected by DNNs or human observers in specific scenes. Experiments show that, in both the digital and physical worlds, the full-coverage vehicle adversarial camouflage texture we create has good stealthiness and can effectively fool advanced DNN object detectors while evading human observers in specific scenes.
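In the neural style transfer literature (following Gatys et al.), a style loss of the kind the abstract mentions is typically computed as the distance between Gram matrices of CNN feature maps of the two images. The paper does not give its exact formulation, so the following is a minimal NumPy sketch of that standard Gram-matrix style loss; the function names and the per-layer feature-map inputs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature map from one layer of a pretrained CNN.
    # The Gram matrix captures channel-wise feature correlations ("style").
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # normalize by layer size

def style_loss(texture_feats, style_feats):
    # Mean squared difference between Gram matrices, summed over layers.
    # texture_feats / style_feats: lists of (C, H, W) arrays, one per layer.
    return sum(
        np.mean((gram_matrix(t) - gram_matrix(s)) ** 2)
        for t, s in zip(texture_feats, style_feats)
    )
```

In the framework described by the abstract, a term like this would be added to the attack objective, so that minimizing the combined loss pushes the camouflage texture toward the statistics of the chosen style image while preserving its adversarial effect.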