Cui Lixin, Wu Xu
College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu, 610059, China.
Sci Rep. 2025 Jul 22;15(1):26679. doi: 10.1038/s41598-025-12575-6.
Federated learning, as an emerging distributed learning framework, enables model training without compromising user data privacy. However, malicious attackers may still infer sensitive user information by analyzing the model updates exchanged during federated learning. To address this, this paper proposes an Adaptive Localized Differential Privacy Federated Learning (ALDP-FL) method. The approach dynamically sets a clipping threshold for each network layer's updates based on the historical moving average of their ℓ2-norm, and injects adaptively scaled noise into each layer accordingly. Additionally, a bounded perturbation mechanism is designed to minimize the impact of the added noise on model accuracy, and a privacy analysis of the method is provided. Finally, experiments on the MNIST, Fashion MNIST, and CIFAR-10 datasets demonstrate the effectiveness and practicality of the proposed method. Specifically, ALDP-FL achieves an average improvement of over 10% across all evaluation metrics: Accuracy increases by 10.57%, Precision by 10.64%, Recall by 10.52%, and F1 Score by 10.64%. For images reconstructed under the iDLG attack, MSE increases by an average of 391.2% and SSIM decreases by an average of 85.4%, indicating substantially stronger resistance to reconstruction attacks than all other comparison methods.
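The core per-layer mechanism described above (clipping threshold from a moving average of past ℓ2-norms, then adaptive noise) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decay factor `beta`, the noise multiplier `sigma`, the use of Gaussian noise, and the function name `aldp_clip_and_perturb` are all illustrative assumptions, and the paper's bounded perturbation mechanism is not reproduced here.

```python
import numpy as np

def aldp_clip_and_perturb(update, ema_norm, beta=0.9, sigma=0.1, rng=None):
    """Sketch of adaptive per-layer clipping and perturbation.

    The clipping threshold is an exponential moving average of the
    layer's historical update l2-norms; the update is clipped to that
    threshold and noise scaled to it is added. All hyperparameters
    here are illustrative, not taken from the ALDP-FL paper.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Update the historical moving average of the l2-norm.
    new_ema = beta * ema_norm + (1.0 - beta) * norm
    # Clip the layer update to the adaptive threshold.
    clipped = update * min(1.0, new_ema / max(norm, 1e-12))
    # Inject noise proportional to the clipping threshold,
    # so layers with larger typical updates receive larger noise.
    noisy = clipped + rng.normal(0.0, sigma * new_ema, size=update.shape)
    return noisy, new_ema
```

Each client would apply this per layer before sending updates to the server, carrying `new_ema` forward across rounds so the threshold tracks the layer's recent update magnitudes.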