IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.
Adversarial images contain imperceptible perturbations that mislead deep neural networks (DNNs) and have attracted great attention in recent years. Although several defense strategies achieve encouraging robustness against adversarial samples, most of them fail to consider robustness to common corruptions (e.g., noise, blur, and weather/digital effects). To address this problem, we propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs by progressively injecting diverse adversarial noises during training. As a result, DNNs trained with PDA achieve better general robustness against both adversarial attacks and common corruptions than those trained with other strategies. In addition, PDA requires less training time and maintains high standard accuracy on clean examples. Further, we theoretically prove that PDA can control the perturbation bound and guarantee better robustness. Extensive results on CIFAR-10, SVHN, ImageNet, CIFAR-10-C, and ImageNet-C demonstrate that PDA comprehensively outperforms its counterparts in robustness to adversarial examples and common corruptions, as well as accuracy on clean images. Further experiments on frequency-based perturbations and visualized gradients show that PDA achieves general robustness and is better aligned with the human visual system.
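The abstract's core idea — progressively injecting diverse noises during training while keeping the perturbation within a controlled bound — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the linear budget schedule, the particular noise families, and all function names here are assumptions for illustration only; the paper's PDA uses adversarial noises whose generation is not specified in the abstract.

```python
import numpy as np

def progressive_budget(epoch, total_epochs, eps_max=8 / 255):
    # Hypothetical schedule: the perturbation bound grows linearly
    # with training progress, up to a final budget eps_max.
    return eps_max * (epoch + 1) / total_epochs

def diversified_noise(shape, eps, rng):
    # Sample one of several noise families to diversify augmentation
    # (the families here are illustrative stand-ins, not the paper's).
    kind = rng.choice(["uniform", "gaussian", "sign"])
    if kind == "uniform":
        noise = rng.uniform(-eps, eps, size=shape)
    elif kind == "gaussian":
        noise = rng.normal(0.0, eps / 2, size=shape)
    else:
        # Sign noise mimics an FGSM-style perturbation direction.
        noise = eps * rng.choice([-1.0, 1.0], size=shape)
    # Enforce the L_inf bound regardless of the noise family chosen.
    return np.clip(noise, -eps, eps)

def pda_augment(images, epoch, total_epochs, rng):
    # Perturb a batch of images (values in [0, 1]) with bounded,
    # progressively growing, diversified noise.
    eps = progressive_budget(epoch, total_epochs)
    noise = diversified_noise(images.shape, eps, rng)
    return np.clip(images + noise, 0.0, 1.0)
```

In a training loop, `pda_augment` would replace (or supplement) the clean batch fed to the network at each step; the growing budget means early epochs see mild perturbations and later epochs see stronger, more varied ones, which is one plausible reading of "progressive" and "diversified" in the method's name.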