

Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.

Affiliations

Artificial Intelligence Medical Center, School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China.

Department of Clinical Laboratory, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China.

Publication

Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.

DOI: 10.1002/mp.15208
PMID: 34487364
Abstract

PURPOSE

Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in the image, which raise safety concerns about the deployment of these systems in clinical settings.

METHODS

To improve the defense of medical imaging systems against adversarial examples, we propose a new model-based defense framework that equips a medical image DNN with a pruning module and an attention mechanism module. The framework is motivated by our analysis of why existing medical image DNNs are vulnerable to adversarial examples: the complex biological textures of medical images and the overparameterization of medical image DNN models.
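The abstract does not specify the pruning criterion the authors use; as a rough illustration of the general technique, here is a minimal magnitude-based weight-pruning sketch in NumPy (the criterion and sparsity level are assumptions for illustration, not the paper's exact method):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A generic magnitude-pruning sketch: keep the largest
    (1 - sparsity) fraction of entries by absolute value.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the cutoff
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned = magnitude_prune(W, sparsity=0.5)  # half the entries set to zero
```

In a full defense pipeline, a mask like this would be applied to each layer's weights and the network fine-tuned afterward; the attention module described in the paper would be a separate architectural addition.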

RESULTS

Experiments on three benchmark medical image datasets verify the effectiveness of our method in improving the robustness of medical image DNN models. On the chest X-ray dataset, our defense achieves a defense rate of up to 77.18% against the projected gradient descent (PGD) attack and 69.49% against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module confirm that both components effectively improve the robustness of the medical image DNN model.
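For context on the attack being defended against: PGD iteratively perturbs an input within an L-infinity ball around the original image. A minimal NumPy sketch on a toy quadratic loss (the loss, step size, and radius are illustrative assumptions, not the paper's evaluation setup):

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, alpha, steps):
    """L-infinity PGD: signed gradient ascent steps, each
    followed by projection back into the eps-ball around x.

    grad_fn returns the gradient of the loss w.r.t. the input;
    in a real attack it would come from a model's backward pass.
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

# Toy example: loss = 0.5 * ||x||^2, so grad_fn(x) = x.
x0 = np.array([0.2, -0.3])
x_adv = pgd_attack(lambda x: x, x0, eps=0.1, alpha=0.05, steps=10)
# Each coordinate is pushed to the ball boundary: [0.3, -0.4].
```

A "defense rate" in this setting is simply the fraction of such adversarial examples that the defended model still classifies correctly.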

CONCLUSIONS

Compared with existing model-based defense methods proposed for natural images, our defense method is better suited to medical images. It can serve as a general strategy for designing more explainable and secure medical deep learning systems, and can be applied across a wide range of medical imaging tasks to improve model robustness.


Similar articles

1. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.

2. Learning defense transformations for counterattacking adversarial examples.
Neural Netw. 2023 Jul;164:177-185. doi: 10.1016/j.neunet.2023.03.008. Epub 2023 Mar 24.

3. A general approach to improve adversarial robustness of DNNs for medical image segmentation and detection.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12926. doi: 10.1117/12.3006534. Epub 2024 Apr 2.

4. Universal adversarial attacks on deep neural networks for medical image classification.
BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.

5. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
IEEE Trans Image Process. 2021;30:1291-1304. doi: 10.1109/TIP.2020.3042083. Epub 2020 Dec 23.

6. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
IEEE Trans Image Process. 2021;30:8955-8967. doi: 10.1109/TIP.2021.3121150. Epub 2021 Oct 29.

7. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.

8. Feature Distillation in Deep Attention Network Against Adversarial Examples.
IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3691-3705. doi: 10.1109/TNNLS.2021.3113342. Epub 2023 Jul 6.

9. Uni-image: Universal image construction for robust neural model.
Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.

10. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.

Cited by

1. Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
PLoS One. 2024 Oct 21;19(10):e0307363. doi: 10.1371/journal.pone.0307363. eCollection 2024.

2. How Does Pruning Impact Long-Tailed Multi-label Medical Image Classifiers?
Med Image Comput Comput Assist Interv. 2023 Oct;14224:663-673. doi: 10.1007/978-3-031-43904-9_64. Epub 2023 Oct 1.