
A machine and human reader study on AI diagnosis model safety under attacks of adversarial images.

Affiliations

Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA.

College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.

Publication Information

Nat Commun. 2021 Dec 14;12(1):7281. doi: 10.1038/s41467-021-27577-x.

Abstract

While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, yet little research has addressed them. We perform a study to investigate the behavior of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate how well human experts can visually identify potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive content of mammogram images used in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that the model initially classified correctly. Five breast-imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on the safety of medical AI models and for developing defensive solutions against adversarial attacks.
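To make the attack mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a GAN-style adversarial attack on a frozen image classifier. It is not the paper's method: the architectures, loss weights, and names (SimpleGenerator, cad_model, attack_step) are all illustrative assumptions. The idea is the one the abstract describes: a generator learns a small perturbation that pushes the CAD model toward the wrong diagnosis while keeping the image visually close to the original.

```python
# Hypothetical sketch of a GAN-style adversarial attack on a binary
# image classifier (e.g., a mammogram CAD model). All names and
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SimpleGenerator(nn.Module):
    """Produces a small additive perturbation for a grayscale image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x) * 0.05  # keep the perturbation small

def attack_step(generator, cad_model, images, labels, optimizer):
    """One training step: push the frozen CAD model toward the wrong
    label while keeping the adversarial image close to the original.
    cad_model stays frozen because only the generator's parameters
    are registered with the optimizer."""
    optimizer.zero_grad()
    adv = (images + generator(images)).clamp(0.0, 1.0)
    logits = cad_model(adv)
    # Flip binary labels (0 <-> 1) so the loss rewards misdiagnosis.
    fool_loss = nn.functional.cross_entropy(logits, 1 - labels)
    # L1 fidelity term keeps the modification visually subtle.
    fidelity = nn.functional.l1_loss(adv, images)
    loss = fool_loss + 10.0 * fidelity
    loss.backward()
    optimizer.step()
    return loss.item()
```

The fidelity term is what separates this setting from an unconstrained attack: it is the reason the resulting images are hard for radiologists to flag, which is exactly what the human-reader part of the study measures.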

