

FedPD: Defending federated prototype learning against backdoor attacks.

Authors

Tan Zhou, Cai Jianping, Li De, Lian Puwei, Liu Ximeng, Che Yan

Affiliations

College of Computer Science and Big Data, Fuzhou University, Fuzhou, 350000, China.

School of Computer Science and Engineering, Guangxi Normal University, Guilin, 541004, China.

Publication

Neural Netw. 2025 Apr;184:107016. doi: 10.1016/j.neunet.2024.107016. Epub 2024 Dec 10.

DOI: 10.1016/j.neunet.2024.107016
PMID: 39708704
Abstract

Federated Learning (FL) is an efficient, distributed machine learning paradigm that enables multiple clients to jointly train high-performance deep learning models while maintaining training data locally. However, due to its distributed computing nature, malicious clients can manipulate the prediction of the trained model through backdoor attacks. Existing defense methods require significant computational and communication overhead during the training or testing phases, limiting their practicality in resource-constrained scenarios and being unsuitable for the Non-IID data distribution typical in general FL scenarios. To address these challenges, we propose the FedPD framework, in which servers and clients exchange prototypes rather than model parameters, preventing the implantation of backdoor channels by malicious clients during FL training and effectively eliminating the success of backdoor attacks at the source, significantly reducing communication overhead. Additionally, prototypes can serve as global knowledge to correct clients' local training. Experiments and performance analysis show that FedPD achieves superior and consistent defense performance compared to existing representative approaches against backdoor attacks. In specific scenarios, FedPD can reduce the success rate of attacks by 90.73% compared to FedAvg without defense while maintaining the main task accuracy above 90%.
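The core mechanism the abstract describes — clients exchanging class prototypes (per-class mean feature embeddings) instead of model parameters, with the server averaging them into global knowledge — can be sketched as follows. This is a minimal illustrative sketch under assumed toy data, not the authors' implementation; the function names and shapes are assumptions.

```python
import numpy as np

def local_prototypes(features, labels, num_classes):
    """Client side: compute a per-class mean embedding (prototype)
    for each class present in the client's local data."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def aggregate_prototypes(client_protos):
    """Server side: average each class prototype over the clients
    that reported it, yielding the global prototypes."""
    collected = {}
    for protos in client_protos:
        for c, p in protos.items():
            collected.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in collected.items()}

# Toy round: 2 clients, 3 classes, 4-dimensional embeddings.
rng = np.random.default_rng(0)
client_protos = []
for _ in range(2):
    feats = rng.normal(size=(30, 4))   # stand-in for model embeddings
    labs = rng.integers(0, 3, size=30)
    client_protos.append(local_prototypes(feats, labs, num_classes=3))

global_protos = aggregate_prototypes(client_protos)
```

Because only low-dimensional prototypes (not weight tensors) cross the network, a malicious client has no channel through which to implant backdoored parameters into a shared model, and the per-round payload is far smaller; clients can then use `global_protos` as a regularization target to correct local training, as the abstract notes.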


Similar Articles

1. FedPD: Defending federated prototype learning against backdoor attacks.
Neural Netw. 2025 Apr;184:107016. doi: 10.1016/j.neunet.2024.107016. Epub 2024 Dec 10.

2. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.

3. Federated Learning Backdoor Attack Based on Frequency Domain Injection.
Entropy (Basel). 2024 Feb 14;26(2):164. doi: 10.3390/e26020164.

4. Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.
Sensors (Basel). 2023 Jan 17;23(3):1052. doi: 10.3390/s23031052.

5. Federated learning with bilateral defense via blockchain.
Neural Netw. 2025 May;185:107199. doi: 10.1016/j.neunet.2025.107199. Epub 2025 Jan 27.

6. Distributed Detection of Malicious Android Apps While Preserving Privacy Using Federated Learning.
Sensors (Basel). 2023 Feb 15;23(4):2198. doi: 10.3390/s23042198.

7. Federated influencer learning for secure and efficient collaborative learning in realistic medical database environment.
Sci Rep. 2024 Sep 30;14(1):22729. doi: 10.1038/s41598-024-73863-1.

8. Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective.
Artif Intell Med. 2025 Jan;159:103024. doi: 10.1016/j.artmed.2024.103024. Epub 2024 Nov 26.

9. Byzantine-robust federated learning via credibility assessment on non-IID data.
Math Biosci Eng. 2022 Jan;19(2):1659-1676. doi: 10.3934/mbe.2022078. Epub 2021 Dec 14.

10. Federated Learning Framework for Brain Tumor Detection Using MRI Images in Non-IID Data Distributions.
J Imaging Inform Med. 2025 Mar 24. doi: 10.1007/s10278-025-01484-9.