Ding Zhiying, Wang Wenshuo, Li Xu, Wang Xuan, Jeon Gwanggil, Zhao Jindong, Mu Chunxiao
School of Computer and Control Engineering, Yantai University, Yantai, 264005, China.
Department of Embedded Systems Engineering, Incheon National University, Incheon, 22012, Korea.
Sci Rep. 2024 Aug 31;14(1):20269. doi: 10.1038/s41598-024-70375-w.
Implicit poisoning is a significant threat in federated learning: malicious nodes subtly alter their gradient parameters each round, making detection difficult. We investigate this problem and show that temporal analysis alone struggles to identify such covert attacks, which can bypass online defenses such as cosine similarity and clustering. Common detection methods instead rely on offline analysis, resulting in delayed responses. However, recalculating gradient updates reveals distinctive characteristics of malicious clients. Based on this finding, we design a privacy-preserving detection algorithm built on trajectory anomaly detection: the singular values of the gradient matrices serve as features, and an improved Isolation Forest algorithm processes them to identify malicious behavior. Experiments on the MNIST, FashionMNIST, and CIFAR-10 datasets show that our method achieves 94.3% detection accuracy with a false positive rate below 1.2%, demonstrating its accuracy and effectiveness against implicit model poisoning attacks.
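The core of the pipeline described above — singular values of each client's gradient-update matrix as features, scored by an Isolation Forest — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the standard scikit-learn `IsolationForest` rather than the improved variant, and the matrix shapes, client counts, and poisoning scale are invented for the example.

```python
# Hedged sketch: flag anomalous federated-learning clients by the
# singular values of their gradient-update matrices.
# Assumptions (not from the paper): 10 clients, one poisoned client
# whose update is subtly scaled up; a 64x32 weight matrix per client.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated per-round gradient updates: 9 benign clients plus one
# malicious client with a larger-magnitude update.
benign = [rng.normal(0.0, 0.01, size=(64, 32)) for _ in range(9)]
poisoned = rng.normal(0.0, 0.05, size=(64, 32))
updates = benign + [poisoned]

def singular_value_features(mat, k=5):
    """Feature vector: the top-k singular values of one update matrix."""
    s = np.linalg.svd(mat, compute_uv=False)
    return s[:k]

X = np.stack([singular_value_features(u) for u in updates])

# Standard Isolation Forest; the paper uses an improved variant.
clf = IsolationForest(n_estimators=100, contamination=0.1, random_state=0)
labels = clf.fit_predict(X)  # -1 = anomaly, 1 = normal

suspects = [i for i, lab in enumerate(labels) if lab == -1]
print("flagged clients:", suspects)
```

Because the poisoned update's singular values are several times larger than the benign clients', it is easily isolated in this toy setup; the paper's contribution lies in making this separation work against subtle, round-by-round implicit attacks.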