Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging.

Authors

Kanca Elif, Ayas Selen, Baykal Kablan Elif, Ekinci Murat

Affiliations

Department of Software Engineering, Karadeniz Technical University, Trabzon, Turkey.

Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey.

Publication Information

Med Biol Eng Comput. 2025 Mar;63(3):673-690. doi: 10.1007/s11517-024-03226-5. Epub 2024 Oct 25.

Abstract

Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for their robustness against such attacks in this domain. This study addresses this research gap by conducting an extensive analysis of various adversarial attacks on ViTs specifically within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even with minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViT robustness in medical imaging and provides insights into their practical deployment in real-world scenarios.
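The abstract does not name the specific attacks, perturbation budgets, or defense settings that were evaluated. As an illustration only, the following is a minimal PyTorch sketch of the general idea described above: a one-step gradient-sign (FGSM-style) perturbation and a single adversarial-training update on a ViT classifier. The model choice (torchvision's vit_b_16), the epsilon value, and the [0, 1] pixel range are assumptions made for the example, not details taken from the paper.

```python
# Illustrative sketch only: FGSM-style attack and one adversarial-training step.
# The attack type, epsilon, model, and preprocessing are assumptions, not the
# paper's actual experimental configuration.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16

def fgsm_attack(model, images, labels, epsilon=2 / 255):
    """Generate adversarial examples with one gradient-sign step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + epsilon * grad.sign()      # small perturbation toward higher loss
    return adv.clamp(0.0, 1.0).detach()       # assumes pixel values in [0, 1]

def adversarial_training_step(model, optimizer, images, labels, epsilon=2 / 255):
    """One minibatch update on adversarially perturbed inputs."""
    model.eval()                              # stable forward pass while attacking
    adv_images = fgsm_attack(model, images, labels, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = vit_b_16(num_classes=2)           # e.g., a binary medical classifier
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.rand(4, 3, 224, 224)            # dummy batch in place of real images
    y = torch.randint(0, 2, (4,))
    print(adversarial_training_step(model, optimizer, x, y))
```

In practice, a study like this one would also evaluate stronger iterative attacks and dataset-specific preprocessing; the sketch only illustrates why small sign-of-gradient perturbations can flip a classifier's predictions and how training on perturbed inputs can raise robustness.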
