Department of Computer Science, University of Miami, 1365 Memorial Drive, Coral Gables, 33146, FL, USA.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.
Deep neural networks (DNNs) are vulnerable to adversarial noises. Adversarial training is a general and effective strategy to improve DNN robustness (i.e., accuracy on noisy data) against adversarial noises. However, DNN models trained with existing adversarial training methods may have much lower standard accuracy (i.e., accuracy on clean data) than the same models trained with the standard method on clean data. This phenomenon, known as the trade-off between accuracy and robustness, is commonly considered unavoidable. The issue prevents adversarial training from being used in many application domains, such as medical image analysis, where practitioners are unwilling to sacrifice much standard accuracy in exchange for adversarial robustness. Our objective is to lift (i.e., alleviate or even avoid) this trade-off between standard accuracy and adversarial robustness for medical image classification and segmentation.
We propose a novel adversarial training method, named Increasing-Margin Adversarial (IMA) Training, which is supported by an equilibrium-state analysis of the optimality of adversarial training samples. Our method aims to preserve accuracy while improving robustness by generating optimal adversarial training samples. We evaluate our method and eight other representative methods on six publicly available image datasets corrupted by adversarial noises generated by AutoAttack and by a white-noise attack.
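The core idea of increasing-margin adversarial training can be illustrated with a toy sketch: each training sample carries its own perturbation margin, which is pushed outward while the adversarial sample is still classified correctly and pulled back once it crosses the decision boundary, so the margins settle near an equilibrium. The sketch below (a minimal NumPy implementation on a linear logistic classifier; the function names `ima_train`, `fgsm_perturb` and the step size `delta` are illustrative assumptions, not the paper's implementation) shows this margin-update rule combined with training on the resulting adversarial samples.

```python
import numpy as np

def fgsm_perturb(w, b, x, y, eps):
    """FGSM-style L-inf perturbation of size eps for a linear logistic
    classifier with labels y in {-1, +1}. The loss gradient w.r.t. x is
    -y * sigmoid(-y*(w.x + b)) * w, whose sign equals sign(-y * w)."""
    return x + eps * np.sign(-y * w)

def ima_train(X, y, epochs=50, lr=0.1, delta=0.05, eps_max=1.0, seed=0):
    """Toy increasing-margin adversarial training (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    b = 0.0
    eps = np.zeros(n)  # per-sample margin estimate, grown gradually
    for _ in range(epochs):
        for i in range(n):
            x_adv = fgsm_perturb(w, b, X[i], y[i], eps[i])
            pred = np.sign(w @ x_adv + b) or 1.0  # break ties toward +1
            if pred == y[i]:
                # still correctly classified: push the margin outward
                eps[i] = min(eps[i] + delta, eps_max)
            else:
                # crossed the decision boundary: pull the margin back
                eps[i] = max(eps[i] - delta, 0.0)
                x_adv = fgsm_perturb(w, b, X[i], y[i], eps[i])
            # SGD step on the logistic loss at the adversarial sample
            z = y[i] * (w @ x_adv + b)
            g = -y[i] / (1.0 + np.exp(z))
            w -= lr * g * x_adv
            b -= lr * g
    return w, b, eps

# Usage on a well-separated 2-D toy dataset: clean accuracy is preserved
# while the per-sample margins grow toward the perturbation budget.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.3, (20, 2)), rng.normal(-2.0, 0.3, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)
w, b, eps = ima_train(X, y)
clean_acc = float(np.mean(np.sign(X @ w + b) == y))
```

The key design choice mirrored here is that the margin update is per-sample rather than a single global perturbation bound, so easy samples acquire large margins while hard samples near the class boundary keep small ones.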
Our method achieves the highest adversarial robustness for image classification and segmentation with the smallest reduction in accuracy on clean data. For one of the applications, our method improves both accuracy and robustness.
Our study has demonstrated that our method can lift the trade-off between standard accuracy and adversarial robustness for medical image classification and segmentation applications. To our knowledge, this is the first work to show that the trade-off is avoidable for medical image segmentation.