

Frequency constraint-based adversarial attack on deep neural networks for medical image classification.

Authors

Chen Fang, Wang Jian, Liu Han, Kong Wentao, Zhao Zhe, Ma Longfei, Liao Hongen, Zhang Daoqiang

Affiliations

Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China.

Publication

Comput Biol Med. 2023 Sep;164:107248. doi: 10.1016/j.compbiomed.2023.107248. Epub 2023 Jul 25.

DOI: 10.1016/j.compbiomed.2023.107248
PMID: 37515875
Abstract

The security of AI systems has gained significant attention in recent years, particularly in the medical diagnosis field. To develop a secure medical image classification system based on deep neural networks, it is crucial to design effective adversarial attacks that can embed hidden, malicious behaviors into the system. However, designing a unified attack method that can generate imperceptible attack samples with high content similarity and be applied to diverse medical image classification systems is challenging due to the diversity of medical imaging modalities and dimensionalities. Most existing attack methods are designed to attack natural image classification models, which inevitably corrupt the semantics of pixels by applying spatial perturbations. To address this issue, we propose a novel frequency constraint-based adversarial attack method capable of delivering attacks in various medical image classification tasks. Specially, our method introduces a frequency constraint to inject perturbation into high-frequency information while preserving low-frequency information to ensure content similarity. Our experiments include four public medical image datasets, including a 3D CT dataset, a 2D chest X-Ray image dataset, a 2D breast ultrasound dataset, and a 2D thyroid ultrasound dataset, which contain different imaging modalities and dimensionalities. The results demonstrate the superior performance of our model over other state-of-the-art adversarial attack methods for attacking medical image classification tasks on different imaging modalities and dimensionalities.
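The core idea described in the abstract — inject the adversarial perturbation into high-frequency components while leaving low frequencies untouched to preserve image content — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function name, the FFT-based frequency split, and the `cutoff` parameter are assumptions made for this sketch.

```python
import numpy as np

def high_frequency_perturb(image, noise, cutoff=0.25):
    """Illustrative sketch: add a perturbation only to the high-frequency
    band of an image's 2D spectrum, keeping low frequencies (the overall
    content) identical to the clean image."""
    H, W = image.shape
    # Centered 2D spectra of the clean image and the candidate perturbation.
    img_f = np.fft.fftshift(np.fft.fft2(image))
    noise_f = np.fft.fftshift(np.fft.fft2(noise))
    # Low-frequency mask: a centered square covering `cutoff` of each axis.
    yy, xx = np.ogrid[:H, :W]
    cy, cx = H // 2, W // 2
    low = (np.abs(yy - cy) <= cutoff * H / 2) & (np.abs(xx - cx) <= cutoff * W / 2)
    # Keep the clean image's low frequencies; add noise only outside the mask.
    mixed = img_f + np.where(low, 0.0, noise_f)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```

Because the low-frequency band of the output spectrum is copied verbatim from the clean image, the coarse structure (and hence perceived content) is preserved by construction; only fine detail carries the attack signal.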


Similar Articles

1. Frequency constraint-based adversarial attack on deep neural networks for medical image classification.
   Comput Biol Med. 2023 Sep;164:107248. doi: 10.1016/j.compbiomed.2023.107248. Epub 2023 Jul 25.
2. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
   Neural Netw. 2024 Mar;171:127-143. doi: 10.1016/j.neunet.2023.11.056. Epub 2023 Nov 25.
3. Uni-image: Universal image construction for robust neural model.
   Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.
4. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
   IEEE Trans Cybern. 2023 Aug;53(8):5323-5335. doi: 10.1109/TCYB.2022.3209175. Epub 2023 Jul 18.
5. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
   Med Phys. 2021 Oct;48(10):6198-6212. doi: 10.1002/mp.15208. Epub 2021 Sep 14.
6. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
   Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
7. Universal adversarial attacks on deep neural networks for medical image classification.
   BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
8. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
   Entropy (Basel). 2023 Mar 6;25(3):461. doi: 10.3390/e25030461.
9. Boosting the transferability of adversarial examples via stochastic serial attack.
   Neural Netw. 2022 Jun;150:58-67. doi: 10.1016/j.neunet.2022.02.025. Epub 2022 Mar 7.
10. Local imperceptible adversarial attacks against human pose estimation networks.
    Vis Comput Ind Biomed Art. 2023 Nov 21;6(1):22. doi: 10.1186/s42492-023-00148-1.

Cited By

1. Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks.
   Sci Rep. 2025 May 24;15(1):18119. doi: 10.1038/s41598-025-03546-y.