Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.

Authors

Yu Chong, Meng Zhenyu, Zhang Wenmiao, Lei Lei, Ni Jianbing, Zhang Kuan, Zhao Hai

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):10913-10927. doi: 10.1109/TNNLS.2024.3486028.

DOI: 10.1109/TNNLS.2024.3486028
PMID: 39499608
Abstract

In distributed systems, data may partially overlap in sample and feature spaces, that is, horizontal and vertical data partitioning. By combining horizontal and vertical federated learning (FL), hybrid FL emerges as a promising solution to simultaneously deal with data overlapping in both sample and feature spaces. Due to its decentralized nature, hybrid FL is vulnerable to model poisoning attacks, where malicious devices corrupt the global model by sending crafted model updates to the server. Existing work usually analyzes the statistical characteristics of all updates to resist model poisoning attacks. However, training local models in hybrid FL requires additional communication and computation steps, increasing the detection cost. In addition, due to data diversity in hybrid FL, solutions based on the assumption that malicious models are distinct from honest models may incorrectly classify honest ones as malicious, resulting in low accuracy. To this end, we propose a secure and efficient hybrid FL against model poisoning attacks. Specifically, we first identify two attacks to define how attackers manipulate local models in a harmful yet covert way. Then, we analyze the execution time and energy consumption in hybrid FL. Based on the analysis, we formulate an optimization problem to minimize training costs while guaranteeing accuracy considering the effect of attacks. To solve the formulated problem, we transform it into a Markov decision process and model it as a multiagent reinforcement learning (MARL) problem. Then, we propose a malicious device detection (MDD) method based on MARL to select honest devices to participate in training and improve efficiency. In addition, we propose an alternative poisoned model detection (PMD) method considering model change consistency. This method aims to prevent poisoned models from being used in the model aggregation. Experimental results validate that under the random local model poisoning attack, the proposed MDD method can save over 50% training costs while guaranteeing accuracy. When facing the advanced adaptive local model poisoning (ALMP) attack, utilizing both the proposed MDD and PMD methods achieves the desired accuracy while reducing execution time and energy consumption.
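The threat-and-defense loop the abstract describes — poisoned updates corrupting a FedAvg-style aggregate, and a consistency check over model changes deciding which devices survive aggregation — can be illustrated with a toy round. This is a hedged sketch, not the paper's MARL-based MDD or its exact PMD algorithm: the sign-flipping attacker, the cosine-similarity score against the median change, and the 0.5 threshold are all assumptions made purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared "true" gradient direction: honest devices roughly agree on it.
TRUE_GRAD = np.array([1.0, -1.0, 0.5, 2.0])

def honest_update(global_model, lr=0.1, noise=0.02):
    # Honest device: descend the shared objective, plus small local noise
    # standing in for data heterogeneity across devices.
    return global_model - lr * TRUE_GRAD + noise * rng.standard_normal(global_model.shape)

def poisoned_update(global_model, lr=0.1, boost=5.0):
    # A crude sign-flipping poisoner (one simple instance of model
    # poisoning): push the model *up* the shared gradient, scaled to
    # dominate an unweighted average.
    return global_model + boost * lr * TRUE_GRAD

def change_consistency_scores(prev_models, curr_models):
    # Score each device by the cosine similarity between its model change
    # and the coordinate-wise median change across devices. Honest devices
    # move coherently; the poisoner moves against the crowd.
    changes = curr_models - prev_models
    median_change = np.median(changes, axis=0)
    denom = np.linalg.norm(changes, axis=1) * np.linalg.norm(median_change)
    return changes @ median_change / np.maximum(denom, 1e-12)

# One toy round: 9 honest devices, 1 attacker, a 4-parameter "model".
global_model = np.zeros(4)
prev = np.stack([global_model] * 10)
curr = np.stack([honest_update(global_model) for _ in range(9)]
                + [poisoned_update(global_model)])

scores = change_consistency_scores(prev, curr)
keep = scores > 0.5                      # threshold is a free parameter here
new_global = curr[keep].mean(axis=0)     # FedAvg over the surviving devices
```

In the abstract's terms, the score plays the role of a model-change-consistency signal: updates that move against the collective direction are excluded before aggregation, analogous in spirit (though not in mechanism) to the PMD filter. The MARL-based device selection that minimizes training cost is not modeled here.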

Similar Articles

1
Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.
IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):10913-10927. doi: 10.1109/TNNLS.2024.3486028.
2
Fair detection of poisoning attacks in federated learning on non-i.i.d. data.
Data Min Knowl Discov. 2023 Jan 4:1-26. doi: 10.1007/s10618-022-00912-6.
3
Enhanced Security and Privacy via Fragmented Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6703-6717. doi: 10.1109/TNNLS.2022.3212627. Epub 2024 May 2.
4
LFighter: Defending against the label-flipping attack in federated learning.
Neural Netw. 2024 Feb;170:111-126. doi: 10.1016/j.neunet.2023.11.019. Epub 2023 Nov 11.
5
Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective.
Artif Intell Med. 2025 Jan;159:103024. doi: 10.1016/j.artmed.2024.103024. Epub 2024 Nov 26.
6
Blockchain-Enabled Asynchronous Federated Learning in Edge Computing.
Sensors (Basel). 2021 May 11;21(10):3335. doi: 10.3390/s21103335.
7
A Heterogeneity-Aware Semi-Decentralized Model for a Lightweight Intrusion Detection System for IoT Networks Based on Federated Learning and BiLSTM.
Sensors (Basel). 2025 Feb 9;25(4):1039. doi: 10.3390/s25041039.
8
DefendFL: A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9098-9111. doi: 10.1109/TNNLS.2024.3423397. Epub 2025 May 2.
9
Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
10
Evaluating Federated Learning Simulators: A Comparative Analysis of Horizontal and Vertical Approaches.
Sensors (Basel). 2024 Aug 9;24(16):5149. doi: 10.3390/s24165149.

Cited By

1
Applications and advances of multi-omics technologies in gastrointestinal tumors.
Front Med (Lausanne). 2025 Jul 23;12:1630788. doi: 10.3389/fmed.2025.1630788. eCollection 2025.