Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan.
PLoS One. 2020 Dec 17;15(12):e0243963. doi: 10.1371/journal.pone.0243963. eCollection 2020.
Owing to the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray and computed tomography imaging are being used to screen patients for COVID-19 effectively. The development of computer-aided systems based on deep neural networks (DNNs) has become an active, open-source effort to detect COVID-19 cases rapidly and accurately, because the limited number of expert radiologists forms a bottleneck for screening. However, the vulnerability of such DNN-based systems has thus far been poorly evaluated, even though realistic, high-risk attacks using a universal adversarial perturbation (UAP), a single (input-image-agnostic) perturbation that can cause a DNN to misclassify most inputs, are readily available. We therefore focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs. We consider both non-targeted UAPs, which cause an input to be assigned an incorrect label, and targeted UAPs, which cause the DNN to classify an input into a specific class. The results demonstrate that the models are vulnerable to both non-targeted and targeted UAPs, even when the UAPs are small: UAPs whose norm is only 2% of the average norm of an image in the dataset achieve success rates of >85% for non-targeted attacks and >90% for targeted attacks. Under non-targeted UAPs, the DNN models judge most chest X-ray images to be COVID-19 cases; targeted UAPs make the models classify most chest X-ray images into the specified target class. These results indicate that careful consideration is required before DNNs are applied in practice to COVID-19 diagnosis; in particular, they emphasize the need for strategies that address security concerns. As one such strategy, we show that iteratively fine-tuning the DNN models using UAPs improves their robustness against UAPs.
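The attack summarized above can be illustrated with a minimal sketch of the standard iterative UAP construction: a single shared perturbation is updated on every image the model still classifies correctly, then projected back onto a norm ball whose radius is 2% of the average image norm. The toy linear "classifier", the step size, and all data below are hypothetical stand-ins for illustration only; the paper attacks real DNN models on chest X-ray images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a DNN: a linear model on flattened 64-pixel "images".
W = rng.normal(size=(3, 64))               # 3 classes
images = rng.normal(size=(20, 64))
labels = np.argmax(images @ W.T, axis=1)   # the model's own clean predictions

def predict(x):
    return int(np.argmax(W @ x))

# Constrain the UAP norm to 2% of the average image norm, as in the abstract.
zeta = 0.02 * np.mean(np.linalg.norm(images, axis=1))
uap = np.zeros(64)

for _ in range(50):                        # a few passes over the dataset
    for x, y in zip(images, labels):
        if predict(x + uap) == y:          # still correct: update the UAP
            # For a linear model, the gradient of the true-class logit
            # w.r.t. the input is W[y]; step against it to suppress class y
            # (a non-targeted update).
            uap -= 0.1 * zeta * W[y] / np.linalg.norm(W[y])
            # Project back onto the L2 ball of radius zeta.
            n = np.linalg.norm(uap)
            if n > zeta:
                uap *= zeta / n

# Fraction of images whose predicted label is flipped by the single UAP.
fooled = float(np.mean([predict(x + uap) != y for x, y in zip(images, labels)]))
print(f"fooling rate: {fooled:.0%}")
```

For a real DNN, the per-image update direction would come from the input gradient of the loss (obtained via backpropagation) rather than a closed-form `W[y]`, but the structure (update only on still-correct images, then project onto the norm ball) is the same.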