Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning.

Authors

Yu Chong, Meng Zhenyu, Zhang Wenmiao, Lei Lei, Ni Jianbing, Zhang Kuan, Zhao Hai

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):10913-10927. doi: 10.1109/TNNLS.2024.3486028.

Abstract

In distributed systems, data may partially overlap in the sample and feature spaces, that is, horizontal and vertical data partitioning. By combining horizontal and vertical federated learning (FL), hybrid FL emerges as a promising solution for simultaneously dealing with data overlap in both spaces. Due to its decentralized nature, hybrid FL is vulnerable to model poisoning attacks, in which malicious devices corrupt the global model by sending crafted model updates to the server. Existing work usually analyzes the statistical characteristics of all updates to resist model poisoning attacks. However, training local models in hybrid FL requires additional communication and computation steps, which increases the detection cost. In addition, because of the data diversity in hybrid FL, solutions that assume malicious models are distinct from honest models may misclassify honest models as malicious, resulting in low accuracy. To this end, we propose a secure and efficient hybrid FL framework against model poisoning attacks. Specifically, we first identify two attacks that define how attackers can manipulate local models in a harmful yet covert way. We then analyze the execution time and energy consumption in hybrid FL. Based on this analysis, we formulate an optimization problem that minimizes training costs while guaranteeing accuracy under the effect of attacks. To solve the formulated problem, we transform it into a Markov decision process and model it as a multiagent reinforcement learning (MARL) problem. We then propose a malicious device detection (MDD) method based on MARL that selects honest devices to participate in training, improving efficiency. In addition, we propose a complementary poisoned model detection (PMD) method based on model change consistency, which prevents poisoned models from entering model aggregation. Experimental results validate that, under the random local model poisoning attack, the proposed MDD method saves over 50% of training costs while guaranteeing accuracy. When facing the advanced adaptive local model poisoning (ALMP) attack, using both the proposed MDD and PMD methods achieves the desired accuracy while reducing execution time and energy consumption.
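The abstract does not spell out the PMD criterion, so the sketch below is only a rough illustration of what a "model change consistency" check could look like before aggregation: each device's current update is compared with its own previous update, and updates whose direction flips abruptly are excluded. The cosine-similarity score, the `threshold` value, and all function names here are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def filter_by_change_consistency(prev_updates, curr_updates, threshold=0.0):
    """Hypothetical PMD-style filter: keep devices whose current update
    is directionally consistent with their previous one.

    prev_updates / curr_updates: dict mapping device id -> 1-D np.ndarray
    threshold: minimum self-consistency score to keep a device (assumed).
    Returns the ids of devices whose updates enter aggregation.
    """
    kept = []
    for dev, curr in curr_updates.items():
        prev = prev_updates.get(dev)
        if prev is None:
            kept.append(dev)  # no history yet: keep by default
        elif cosine(prev, curr) >= threshold:
            kept.append(dev)  # update direction consistent across rounds
        # else: treat the update as poisoned and exclude it
    return kept
```

In the scheme described by the abstract, a check of this kind would run at the server alongside MARL-based device selection; how the consistency threshold is chosen, and whether consistency is measured against a device's own history or against the global update, are details the abstract leaves open.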
