Liu Jiao, Li Xinghua, Liu Ximeng, Zhang Haiyan, Miao Yinbin, Deng Robert H
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9098-9111. doi: 10.1109/TNNLS.2024.3423397. Epub 2025 May 2.
Federated learning (FL) has become a popular learning paradigm that enables model training without sharing raw data. Unfortunately, it remains vulnerable to privacy leakage and poisoning attacks, which compromise user data security and degrade model quality. Numerous privacy-preserving frameworks have therefore been proposed, among which the mask-based framework offers advantages in efficiency and functionality. However, it is more susceptible to poisoning attacks from malicious users, and existing work lacks practical means to detect such attacks within this framework. To overcome this challenge, we present DefendFL, an efficient, privacy-preserving, and poisoning-detectable mask-based FL scheme. We first leverage a collinearity mask to protect users' gradient privacy. Cosine similarity is then applied to the masked gradients to identify poisonous ones. Meanwhile, a verification mechanism is designed to check the mask, ensuring its validity in aggregation and thwarting poisoning attacks mounted by intentionally altering the mask. Finally, we resist poisoning attacks by removing malicious gradients or lowering their weights during aggregation. Security analysis and experimental evaluation show that DefendFL effectively detects and mitigates poisoning attacks while outperforming existing privacy-preserving detection schemes in efficiency.
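The detection-and-aggregation idea described in the abstract — scoring gradients by cosine similarity and then removing suspicious ones or lowering their weights — can be sketched as follows. This is a minimal illustration, not the paper's protocol: the reference direction (a coordinate-wise median), the similarity threshold, and the function name `robust_aggregate` are all assumptions, and DefendFL's masking and mask-verification machinery is omitted entirely.

```python
import numpy as np

def robust_aggregate(gradients, threshold=0.5):
    """Aggregate user gradients while resisting poisoning.

    Illustrative sketch only: each gradient is scored by its cosine
    similarity to a robust reference direction (coordinate-wise median);
    gradients below `threshold` are dropped, and the rest are weighted
    by their similarity score.
    """
    G = np.stack(gradients)                 # shape: (n_users, dim)
    ref = np.median(G, axis=0)              # robust reference direction
    # Cosine similarity of each gradient to the reference.
    sims = G @ ref / (
        np.linalg.norm(G, axis=1) * np.linalg.norm(ref) + 1e-12
    )
    # Remove low-similarity (suspected poisonous) gradients;
    # down-weight the remainder by their similarity score.
    weights = np.where(sims >= threshold, sims, 0.0)
    if weights.sum() == 0.0:
        return ref                          # fall back to the median
    return (weights[:, None] * G).sum(axis=0) / weights.sum()

# Usage: three honest gradients pointing one way, one poisoned gradient
# pointing the opposite way is filtered out.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = [np.array([-1.0, -1.0])]
agg = robust_aggregate(honest + poisoned)
```

The coordinate-wise median is just one convenient robust reference; in a real deployment the comparison baseline and threshold would be dictated by the scheme's threat model.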