
Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.

Affiliations

Artificial Intelligence Medical Center, School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China.

Department of Clinical Laboratory, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China.

Publication information

Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.

Abstract

PURPOSE

Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in the image, which raises safety concerns about deploying these systems in clinical settings.

METHODS

To improve the defense of medical imaging systems against adversarial examples, we propose a new model-based defense framework for medical image DNN models, equipped with a pruning module and an attention-mechanism module. The framework is motivated by our analysis of why existing medical image DNN models are vulnerable to adversarial examples: the complex biological textures of medical images and the overparameterization of medical image DNN models.
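The abstract does not specify the exact pruning criterion or attention module the framework uses, so the following is only an illustrative sketch of the two ingredients it names, assuming magnitude-based unstructured pruning and a squeeze-and-excitation-style channel gate (the `w1`/`w2` bottleneck weights are hypothetical placeholders for learned parameters):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning).

    Pruning removes redundant parameters, countering the overparameterization
    the paper identifies as one source of adversarial vulnerability.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) feature map.

    Global-average-pool each channel, pass the pooled vector through a small
    two-layer bottleneck, and rescale the channels by the sigmoid gates,
    emphasizing informative channels over noisy texture.
    """
    squeeze = feature_map.mean(axis=(1, 2))           # (C,) pooled descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)            # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gates, (C,)
    return feature_map * gates[:, None, None]
```

In a real network the pruning mask would be applied per layer and the attention weights trained end-to-end; this sketch only shows the shape of each operation.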

RESULTS

Experiments on three benchmark medical image datasets verified the effectiveness of our method in improving the robustness of medical image DNN models. On the chest X-ray dataset, our defense method achieves up to a 77.18% defense rate against the projected gradient descent attack and a 69.49% defense rate against the DeepFool attack. Ablation experiments on the pruning module and the attention-mechanism module verify that both effectively improve the robustness of the medical image DNN model.
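The projected gradient descent (PGD) attack evaluated above iteratively ascends the loss gradient and projects each step back into an epsilon-ball around the clean image; the defense rate is the fraction of adversarial examples the defended model still classifies correctly. A minimal sketch, using a toy logistic classifier (whose input gradient is analytic) rather than the paper's DNN models, with illustrative `eps`/`alpha`/`steps` values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.03, alpha=0.01, steps=10):
    """L-infinity PGD against a logistic classifier p = sigmoid(w @ x + b).

    The input gradient of the binary cross-entropy loss is (p - y) * w, so
    each step ascends the loss by the gradient sign, then projects back into
    the eps-ball around the clean image and the valid pixel range [0, 1].
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w                         # analytic input gradient
        x_adv = x_adv + alpha * np.sign(grad)      # loss-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel values
    return x_adv

def defense_rate(xs, ys, w, b, eps=0.03):
    """Fraction of PGD adversarial examples still classified correctly."""
    correct = 0
    for x, y in zip(xs, ys):
        x_adv = pgd_attack(x, y, w, b, eps=eps)
        pred = int(sigmoid(x_adv @ w + b) > 0.5)
        correct += int(pred == y)
    return correct / len(xs)
```

An undefended linear model like this one is easily fooled even at small epsilon; the paper's reported 77.18% defense rate is measured the same way, but on its pruned, attention-equipped DNNs.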

CONCLUSIONS

Compared with existing model-based defense methods proposed for natural images, our defense method is better suited to medical images. It can serve as a general strategy for designing more explainable and secure medical deep learning systems, and can be widely applied across medical image tasks to improve the robustness of medical models.

