
Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.

Affiliations

Computer Science Department, The University of British Columbia, BC, V6T 1Z4, Canada.

Electrical and Computer Engineering Department, The University of British Columbia, BC, V6T 1Z4, Canada.

Publication Information

Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.

DOI: 10.1016/j.media.2023.102965
PMID: 37804585
Abstract

Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research and augment medical datasets. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way of training a central model on distributed data while keeping the raw data local. However, because the FL server cannot access the raw data, it is vulnerable to backdoor attacks, an adversarial attack carried out by poisoning the training data. Most backdoor attack strategies focus on classification models and centralized domains; it remains an open question whether existing backdoor attacks can affect GAN training and, if so, how to defend against them in the FL setting. In this work, we investigate the overlooked issue of backdoor attacks in federated GANs (FedGANs). We find that the attack succeeds because some local discriminators overfit the poisoned data and corrupt the local GAN equilibrium; when the generators' parameters are averaged, this contamination spreads to other clients and yields high generator loss. We therefore propose FedDetect, an efficient and effective defense against backdoor attacks in the FL setting that lets the server detect a client's adversarial behavior from its losses and block malicious clients. Our extensive experiments on two medical datasets of different modalities demonstrate that the backdoor attack on FedGANs yields synthetic images of low fidelity. After detecting and suppressing the malicious clients with the proposed defense, we show that FedGANs can synthesize high-quality labeled medical datasets for data augmentation that improve classification models' performance.

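The abstract describes FedDetect only at a high level: the server watches per-client losses and blocks clients whose behavior looks adversarial. The sketch below is a minimal, hypothetical rendering of that idea in Python/NumPy. The median/MAD outlier rule, the threshold `k`, and the function name `feddetect_round` are all assumptions made for illustration, not the paper's actual algorithm.

```python
import numpy as np

def feddetect_round(client_losses, client_updates, k=3.0):
    """One server-side round of loss-based client screening.

    Loosely follows the idea in the abstract: clients whose reported
    generator loss is an outlier are treated as malicious and excluded,
    and the remaining updates are averaged (plain FedAvg).

    client_losses  -- list of per-client generator losses (floats)
    client_updates -- list of flattened parameter arrays, equal shapes
    k              -- outlier threshold in MAD units (assumed value)
    """
    losses = np.asarray(client_losses, dtype=float)
    med = np.median(losses)
    mad = np.median(np.abs(losses - med)) + 1e-12  # robust spread estimate
    benign = np.abs(losses - med) / mad <= k       # True = keep client

    if not benign.any():      # degenerate case: keep everyone this round
        benign[:] = True

    kept = [u for u, ok in zip(client_updates, benign) if ok]
    new_global = np.mean(kept, axis=0)             # FedAvg over kept clients
    blocked = [i for i, ok in enumerate(benign) if not ok]
    return new_global, blocked
```

A robust statistic such as MAD is a natural stand-in here because, per the abstract, poisoned clients exhibit abnormally high generator loss, so a server can separate them from the benign majority without ever seeing the raw data.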

Similar Articles

1
Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
2
Federated Learning Backdoor Attack Based on Frequency Domain Injection.
Entropy (Basel). 2024 Feb 14;26(2):164. doi: 10.3390/e26020164.
3
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7589-7605. doi: 10.1109/TPAMI.2024.3392760. Epub 2024 Nov 6.
4
Poison Ink: Robust and Invisible Backdoor Attack.
IEEE Trans Image Process. 2022;31:5691-5705. doi: 10.1109/TIP.2022.3201472. Epub 2022 Sep 2.
5
How to backdoor split learning.
Neural Netw. 2023 Nov;168:326-336. doi: 10.1016/j.neunet.2023.09.037. Epub 2023 Sep 24.
6
Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1177-1191. doi: 10.1109/TNNLS.2020.3041202. Epub 2022 Feb 28.
7
Backdoor Attack against Face Sketch Synthesis.
Entropy (Basel). 2023 Jun 25;25(7):974. doi: 10.3390/e25070974.
8
Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.
Sensors (Basel). 2023 Jan 17;23(3):1052. doi: 10.3390/s23031052.
9
IFL-GAN: Improved Federated Learning Generative Adversarial Network With Maximum Mean Discrepancy Model Aggregation.
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10502-10515. doi: 10.1109/TNNLS.2022.3167482. Epub 2023 Nov 30.
10
Generative Adversarial Networks in Medical Image Processing.
Curr Pharm Des. 2021;27(15):1856-1868. doi: 10.2174/1381612826666201125110710.

Cited By

1
Self-supervised identification and elimination of harmful datasets in distributed machine learning for medical image analysis.
NPJ Digit Med. 2025 Feb 15;8(1):104. doi: 10.1038/s41746-025-01499-0.
2
Using deep learning to shorten the acquisition time of brain MRI in acute ischemic stroke: Synthetic T2W images generated from b0 images.
PLoS One. 2025 Jan 6;20(1):e0316642. doi: 10.1371/journal.pone.0316642. eCollection 2025.
3
Exploring Backdoor Attacks in Off-the-Shelf Unsupervised Domain Adaptation for Securing Cardiac MRI-Based Diagnosis.
Proc IEEE Int Symp Biomed Imaging. 2024 May;2024. doi: 10.1109/isbi56570.2024.10635403. Epub 2024 Aug 22.
4
Engineering and application of LacI mutants with stringent expressions.
Microb Biotechnol. 2024 Mar;17(3):e14427. doi: 10.1111/1751-7915.14427.