
Implicit adversarial data augmentation and robustness with Noise-based Learning.

Authors

Panda Priyadarshini, Roy Kaushik

Affiliations

Department of Electrical Engineering, Yale University, New Haven, USA.

School of Electrical and Computer Engineering, Purdue University, West Lafayette, USA.

Publication

Neural Netw. 2021 Sep;141:120-132. doi: 10.1016/j.neunet.2021.04.008. Epub 2021 Apr 20.

Abstract

We introduce a Noise-based Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that learning random noise, introduced with the input and trained with the same loss function used during posterior maximization, improves a model's adversarial resistance. We show that the learnt noise performs implicit adversarial data augmentation, boosting a model's adversarial generalization capability. We evaluate our approach's efficacy and provide a simple visualization tool, based on Principal Component Analysis, for understanding adversarial data. We conduct comprehensive experiments on prevailing benchmarks such as MNIST, CIFAR10, CIFAR100, and Tiny ImageNet, and show that our approach performs remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art defense mechanisms, such as adversarial training, consistently outperforms prior techniques in both white-box and black-box attacks.

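The abstract's central idea can be sketched in a few lines: a learnable noise tensor enters the network together with the input and is updated by gradient descent on the same loss as the weights. The toy below is a hypothetical reading of that idea on a logistic-regression model, not the paper's reference implementation; the shared-noise shape, learning rate, and data are all illustrative assumptions.

```python
# Toy sketch of Noise-based Learning (NoL): a learnable additive noise vector
# (shared across samples, an assumption of this sketch) is added to the input
# and updated with the SAME cross-entropy loss as the model weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: 64 samples, 8 features.
X = rng.normal(size=(64, 8))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(8)                       # model weights
noise = rng.uniform(-0.1, 0.1, 8)     # learnable input noise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(200):
    x_noisy = X + noise               # noise is introduced with the input
    p = sigmoid(x_noisy @ w)
    g = (p - y) / len(y)              # cross-entropy gradient w.r.t. logits
    grad_w = x_noisy.T @ g            # gradient w.r.t. weights
    grad_noise = w * g.sum()          # gradient w.r.t. the shared noise
    w -= lr * grad_w                  # both parameters descend the same loss
    noise -= lr * grad_noise

acc = ((sigmoid((X + noise) @ w) > 0.5) == y).mean()
```

The point of the sketch is only the joint update: the noise is not fixed augmentation but a trained quantity, which is what the abstract credits for the implicit adversarial data augmentation effect.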
