Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
Nat Commun. 2021 Dec 14;12(1):7281. doi: 10.1038/s41467-021-27577-x.
While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, yet little research has addressed them. We perform a study to investigate the behavior of an AI diagnosis model under adversarial images generated by generative adversarial network (GAN) models and to evaluate how well human experts can visually identify potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive content of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases it initially classifies correctly. Five breast imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on medical AI models' safety issues and for developing potential defensive solutions against adversarial attacks.
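As a minimal illustrative sketch, not the authors' method: the paper attacks with a GAN that rewrites diagnosis-sensitive mammogram content, but the underlying idea of an adversarial sample, a small image change crafted to flip a classifier's output, can be shown with the simpler fast gradient sign method (FGSM) against a hypothetical binary AI-CAD model. All names and parameters below (cad_model, epsilon, the benign/malignant label convention) are assumptions for illustration only.

    # Hypothetical sketch: FGSM adversarial perturbation of a mammogram tensor.
    # The paper uses a GAN to modify diagnosis-sensitive content; FGSM is shown
    # here only as a minimal stand-in for the general adversarial-attack idea.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(cad_model, image, true_label, epsilon=0.02):
        """Return an adversarial copy of `image` that nudges `cad_model`
        toward a wrong benign/malignant prediction. Names are hypothetical."""
        image = image.clone().detach().requires_grad_(True)
        logits = cad_model(image)                    # assumed shape (1, 2): benign/malignant
        loss = F.cross_entropy(logits, true_label)   # true_label: LongTensor of shape (1,)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to a valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Unlike this per-image gradient step, a GAN-based attack such as the one studied in the paper trains a generator to produce the adversarial modifications directly, which is why the resulting samples can alter clinically meaningful image content rather than adding uniform pixel noise.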