Zhejiang University of Science and Technology, Hangzhou 310023, China.
Comput Biol Med. 2023 Sep;164:107251. doi: 10.1016/j.compbiomed.2023.107251. Epub 2023 Jul 11.
Recent studies have found that medical images are vulnerable to adversarial attacks. However, it is difficult to protect medical imaging systems from adversarial examples because the lesion features of medical images are complex and the images are high resolution. A simple and effective method is therefore needed to improve the robustness of medical imaging systems. We find that attackers generate adversarial perturbations tailored to the lesion characteristics of different medical image datasets, which can shift the model's attention elsewhere. In this paper, we propose global attention noise (GATN) injection, which combines global noise at the example layer with attention noise in the feature layers. Global noise enhances the lesion features of medical images, keeping examples away from the sharp regions where the model is vulnerable. Attention noise further smooths the model locally against small perturbations. According to the characteristics of medical image datasets, we introduce global attention lesion-unrelated noise (GATN-UR) for datasets with unclear lesion boundaries and global attention lesion-related noise (GATN-R) for datasets with clear lesion boundaries. Extensive experiments on ChestX-ray, Dermatology, and Fundoscopy datasets show that GATN improves the robustness of medical diagnosis models against a variety of powerful attacks and significantly outperforms existing adversarial defense methods. Specifically, robust accuracy under the PGD attack is 86.66% on ChestX-ray, 72.49% on Dermatology, and 90.17% on Fundoscopy; under the AutoAttack (AA) attack, it is 87.70% on ChestX-ray, 66.85% on Dermatology, and 87.83% on Fundoscopy.
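The two-level mechanism the abstract describes, noise injected at the example (input) layer plus attention-scaled noise injected in feature layers, might be sketched roughly as below. This is a minimal illustrative sketch only: the function names, the Gaussian noise model, and the use of an attention map as a multiplicative mask are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def inject_global_noise(x, sigma=0.05, rng=None):
    # Example-layer ("global") noise: perturb the whole input image,
    # then clip back to the valid pixel range [0, 1].
    # Gaussian noise is an assumption; the paper may use another distribution.
    rng = rng or np.random.default_rng(0)
    return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

def inject_attention_noise(feat, attn, sigma=0.05, rng=None):
    # Feature-layer noise scaled by an attention map, so the perturbation
    # concentrates where the model attends (illustrating the idea of
    # lesion-related noise); attn broadcasts over the channel axis.
    rng = rng or np.random.default_rng(1)
    noise = rng.normal(0.0, sigma, feat.shape)
    return feat + attn * noise

# Toy usage with a fake 8x8 RGB image batch and a 4x4 feature map.
x = np.random.default_rng(2).random((1, 3, 8, 8))
x_noisy = inject_global_noise(x)

feat = np.ones((1, 4, 4, 4))
attn = np.linspace(0.0, 1.0, 16).reshape(1, 1, 4, 4)
feat_noisy = inject_attention_noise(feat, attn)
```

Note that where the attention weight is zero, the feature map is left untouched, which is what makes the feature-layer noise spatially selective rather than uniform.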