Jagatap Gauri, Joshi Ameya, Chowdhury Animesh Basak, Garg Siddharth, Hegde Chinmay
Electrical and Computer Engineering, New York University, New York, NY, United States.
Front Artif Intell. 2022 Jan 4;4:780843. doi: 10.3389/frai.2021.780843. eCollection 2021.
In this paper, we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function equipped with an additional entropic regularization term. This loss accounts for the contribution of adversarial samples drawn from a specially designed distribution over the data space, one that assigns high probability to points that both have high loss and lie in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) robust classification accuracy compared with several state-of-the-art robust learning methods on benchmark datasets such as MNIST and CIFAR-10.
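As a rough illustration of the idea described in the abstract, the following is a minimal PyTorch sketch of entropy-regularized adversarial training: an inner Langevin-style sampler draws a perturbation that upweights high-loss points inside an L-infinity neighborhood of each training sample, and an outer step minimizes the loss at the sampled points. The sampler design, the function names (`atent_style_perturb`, `train_step`), and all hyperparameters (`eps`, `step`, `steps`, `temp`) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of entropy-regularized adversarial training in the
# spirit of ATENT. Not the authors' code; all names and settings are assumed.
import torch
import torch.nn.functional as F

def atent_style_perturb(model, x, y, eps=8/255, step=2/255, steps=10, temp=0.1):
    """Draw an adversarial sample from a distribution that favors high-loss
    points in the eps-neighborhood of x (Langevin-style sampler sketch)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Gradient ascent on the loss plus injected Gaussian noise: the
            # noise term makes this a sampler over high-loss neighbors rather
            # than a deterministic PGD attack.
            noise = torch.randn_like(delta) * (2 * step * temp) ** 0.5
            delta += step * grad.sign() + noise
            delta.clamp_(-eps, eps)                   # stay in the neighborhood
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep valid pixel range
        delta.requires_grad_(True)
    return (x + delta).detach()

def train_step(model, optimizer, x, y):
    """One outer step: minimize the loss at the sampled adversarial points."""
    x_adv = atent_style_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The injected noise (scaled by the assumed temperature `temp`) is what distinguishes this sketch from standard PGD-based adversarial training: with noise, the inner loop samples from a Gibbs-like distribution over high-loss neighbors instead of converging to a single worst-case perturbation.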