Department of Computer Science, Aligarh Muslim University, Aligarh, Uttar Pradesh 202002, India.
J Imaging Inform Med. 2024 Feb;37(1):308-338. doi: 10.1007/s10278-023-00916-8. Epub 2024 Jan 10.
In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability proved particularly vital during the COVID-19 pandemic, when rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have shown remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks: carefully crafted perturbations to input data that can disrupt a model's decision-making process. In the medical context, such vulnerabilities can have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning with adversarial image filtering. During the training phase, we use a modified adversarial learning algorithm to enhance the model's resilience to adversarial examples. During the inference phase, we apply JPEG compression to mitigate the perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3, which perform exceptionally well in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. We then assess the vulnerability of these models to three targeted adversarial attacks: the fast gradient sign method (FGSM), projected gradient descent (PGD), and the basic iterative method (BIM). The results show a significant drop in model performance under attack. Our defense framework, however, greatly improves the models' resistance to adversarial attacks, maintaining high accuracy on adversarial examples while preserving reliable COVID-19 diagnosis on clean images.
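To make the threat model concrete, the following is a minimal PyTorch sketch of the targeted attacks named above (FGSM, and PGD/BIM as its iterated form). The function names, the L-infinity budget eps, the step size alpha, and the iteration count are illustrative assumptions, not the paper's settings; inputs are assumed to be image tensors normalized to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, x, target, eps):
    # Single-step targeted FGSM: step *against* the loss gradient so the
    # prediction moves toward the attacker-chosen target class.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv - eps * grad.sign()).clamp(0, 1).detach()

def pgd_targeted(model, x, target, eps=8/255, alpha=2/255, steps=10):
    # Iterated variant: take small targeted steps and project back into
    # the L-inf eps-ball around the clean input. BIM is this same loop;
    # PGD conventionally adds a random start inside the ball.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```

Because the perturbation is bounded in L-infinity norm, the adversarial radiograph is visually indistinguishable from the clean one, which is precisely what makes such attacks dangerous in a diagnostic setting.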
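A correspondingly minimal sketch of the two-phase defense follows: adversarial training during the training phase, and JPEG re-encoding as an input filter during the inference phase. Plain FGSM adversarial training stands in here for the paper's modified algorithm, quality=75 is an assumed JPEG quality factor, and 3-channel images in [0, 1] with shape (N, C, H, W) are assumed throughout.

```python
import io
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def fgsm(model, x, y, eps):
    # Untargeted one-step FGSM used to generate training-time adversaries.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=2/255):
    # Phase 1: optimize on a mix of clean and perturbed examples so the
    # decision boundary becomes robust to small input perturbations.
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def jpeg_filter(batch, quality=75):
    # Phase 2: at inference, round-trip each image through lossy JPEG
    # encoding to suppress the high-frequency structure of adversarial
    # noise before the classifier sees the input.
    out = []
    for img in batch:
        arr = (img.permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        dec = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
        out.append(torch.from_numpy(dec).permute(2, 0, 1))
    return torch.stack(out).to(batch.device)
```

JPEG filtering is attractive as an inference-phase defense because it is model-agnostic and adds negligible cost per image; the quality factor trades off how aggressively perturbations are smoothed against how much diagnostic detail is retained.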