Ma Linhai, Chen Jiasong, Qian Linchen, Liang Liang
Department of Computer Science, University of Miami, 1365 Memorial Drive, Coral Gables, 33146, FL, USA.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12926. doi: 10.1117/12.3006534. Epub 2024 Apr 2.
It is known that deep neural networks (DNNs) are vulnerable to adversarial noise, so improving their adversarial robustness is essential. This matters not only because imperceptible adversarial noise threatens the performance of DNN models, but also because adversarially robust DNNs show strong resistance to the white noise that may be present everywhere in the real world. To improve the adversarial robustness of DNNs, a variety of adversarial training methods have been proposed. Most previous methods were designed for a single application scenario: image classification. However, in the medical imaging field, image segmentation, landmark detection, and object detection are more common than whole-image classification. Although classification and other tasks (e.g., regression) share some similarities, they also differ in certain ways; for example, some adversarial training methods rely on a misclassification criterion, which is well defined for classification but not for regression. These limitations hinder the application of adversarial training to many medical image analysis tasks. Our contributions are as follows: (1) We investigated existing adversarial training methods and identified the challenges that make them unsuitable for segmentation and detection tasks. (2) We modified and adapted several existing adversarial training methods for medical image segmentation and detection tasks. (3) We proposed a general adversarial training method for medical image segmentation and detection. (4) We implemented our method on diverse medical imaging tasks using publicly available datasets, including MRI segmentation, cephalometric landmark detection, and blood cell detection. The experiments substantiated the effectiveness of our method.
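The abstract notes that misclassification-based attack criteria do not carry over to regression-style tasks. A minimal sketch of the general idea (not the authors' method; the linear model, function name, and parameters here are illustrative assumptions) is a PGD-style attack driven directly by the task loss itself, e.g., the MSE of a toy linear regressor, so that no misclassification criterion is needed:

```python
import numpy as np

def pgd_attack_regression(x, y, W, eps=0.1, alpha=0.02, steps=10):
    """Illustrative PGD-style attack on a toy linear regressor
    y_hat = W @ x. The perturbation maximizes the MSE task loss
    inside an L-infinity ball of radius eps around x; no
    misclassification criterion is involved."""
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of 0.5 * ||W x - y||^2 with respect to x
        # is W^T (W x - y).
        grad = W.T @ (W @ x_adv - y)
        # Signed gradient ascent step on the task loss.
        x_adv = x_adv + alpha * np.sign(grad)
        # Project back into the eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In adversarial training, the model would then be updated on `x_adv` instead of (or in addition to) the clean `x`; for segmentation or landmark detection, the MSE here would be replaced by the corresponding task loss (e.g., a Dice or distance loss).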