Adversarial Medical Image with Hierarchical Feature Hiding.

Authors

Yao Qingsong, He Zecheng, Li Yuexiang, Lin Yi, Ma Kai, Zheng Yefeng, Kevin Zhou S

Publication

IEEE Trans Med Imaging. 2023 Nov 23;PP. doi: 10.1109/TMI.2023.3335098.

DOI: 10.1109/TMI.2023.3335098
PMID: 37995172
Abstract

Deep learning based methods for medical images can be easily compromised by adversarial examples (AEs), posing a great security flaw in clinical decision-making. It has been discovered that conventional adversarial attacks like PGD, which optimize the classification logits, are easy to distinguish in the feature space, resulting in accurate reactive defenses. To better understand this phenomenon and reassess the reliability of the reactive defenses for medical AEs, we thoroughly investigate the characteristics of conventional medical AEs. Specifically, we first theoretically prove that conventional adversarial attacks change the outputs by continuously optimizing vulnerable features in a fixed direction, thereby leading to outlier representations in the feature space. Then, a stress test is conducted to reveal the vulnerability of medical images, by comparing with natural images. Interestingly, this vulnerability is a double-edged sword, which can be exploited to hide AEs. We then propose a simple-yet-effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks, which assists in hiding the adversarial feature in the target feature distribution. The proposed method is evaluated on three medical datasets, both 2D and 3D, with different modalities. The experimental results demonstrate the superiority of HFC, i.e., it bypasses an array of state-of-the-art adversarial medical AE detectors more efficiently than competing adaptive attacks, which reveals the deficiencies of medical reactive defenses and allows the development of more robust defenses in the future.
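The "conventional adversarial attack" the abstract contrasts against, PGD, iteratively ascends the loss gradient in input space while projecting back into an L-infinity ball around the clean image. This is a minimal numpy sketch on a toy logistic classifier, not the paper's HFC method; the function and parameter names (`pgd_attack`, `eps`, `alpha`) are illustrative choices:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """PGD on a logistic classifier: maximize the loss of the true
    label y by ascending the input-space gradient, projecting back
    into an L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        logit = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-logit))          # sigmoid probability
        grad = (p - y) * w                        # d(cross-entropy)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)     # signed-gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy usage: the attack lowers the true-class logit within the eps budget.
x = np.array([1.0, -0.5])
w = np.array([2.0, -1.0])
x_adv = pgd_attack(x, y=1, w=w, b=0.0)
```

Because every iteration pushes along a fixed direction (the sign of the gradient), the resulting representations drift toward outliers in feature space, which is exactly the property the paper shows reactive detectors exploit and HFC is designed to suppress.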


Similar Articles

1. Adversarial Medical Image with Hierarchical Feature Hiding.
   IEEE Trans Med Imaging. 2023 Nov 23;PP. doi: 10.1109/TMI.2023.3335098.
2. Universal adversarial attacks on deep neural networks for medical image classification.
   BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
3. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images.
   J Imaging Inform Med. 2024 Feb;37(1):308-338. doi: 10.1007/s10278-023-00916-8. Epub 2024 Jan 10.
4. DEFEAT: Decoupled feature attack across deep neural networks.
   Neural Netw. 2022 Dec;156:13-28. doi: 10.1016/j.neunet.2022.09.009. Epub 2022 Sep 20.
5. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning.
   J Imaging. 2022 Feb 4;8(2):38. doi: 10.3390/jimaging8020038.
6. Exploring Robust Features for Improving Adversarial Robustness.
   IEEE Trans Cybern. 2024 Sep;54(9):5141-5151. doi: 10.1109/TCYB.2024.3380437. Epub 2024 Aug 26.
7. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
   Entropy (Basel). 2021 Oct 18;23(10):1359. doi: 10.3390/e23101359.
8. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
   IEEE Trans Cybern. 2023 Aug;53(8):5323-5335. doi: 10.1109/TCYB.2022.3209175. Epub 2023 Jul 18.
9. LAFIT: Efficient and Reliable Evaluation of Adversarial Defenses With Latent Features.
   IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):354-369. doi: 10.1109/TPAMI.2023.3323698. Epub 2023 Dec 5.
10. Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition.
   Sensors (Basel). 2021 Jun 7;21(11):3922. doi: 10.3390/s21113922.

Cited By

1. CBCT-to-CT synthesis using a hybrid U-Net diffusion model based on transformers and information bottleneck theory.
   Sci Rep. 2025 Mar 28;15(1):10816. doi: 10.1038/s41598-025-92094-6.
2. DeepOptimalNet: optimized deep learning model for early diagnosis of pancreatic tumor classification in CT imaging.
   Abdom Radiol (NY). 2025 Mar 6. doi: 10.1007/s00261-025-04860-9.