Li Yanan, Yang Shusen, Ren Xuebin, Shi Liang, Zhao Cong
IEEE Trans Pattern Anal Mach Intell. 2024 Feb;46(2):1243-1256. doi: 10.1109/TPAMI.2023.3332428. Epub 2024 Jan 8.
The fusion of federated learning (FL) and differential privacy (DP) provides more comprehensive and rigorous privacy protection, and has therefore attracted extensive interest from both academia and industry. However, facing the system-level challenge of device heterogeneity, most current synchronous FL paradigms suffer low efficiency due to the straggler effect, which can be significantly reduced by asynchronous FL (AFL). AFL, however, has never been comprehensively studied, which poses a major challenge for the utility optimization of DP-enhanced AFL. Here, theoretically motivated multi-stage adaptive private algorithms are proposed to improve the trade-off between model utility and privacy for DP-enhanced AFL. In particular, we first build two DP-enhanced AFL frameworks that account for universal factors under different adversary models. We then present a rigorous analysis of the model convergence of AFL, based on which DP can be achieved adaptively with high utility. Through extensive experiments on different training models and benchmark datasets, we demonstrate that the proposed algorithms achieve the best overall performance, improving test accuracy by up to 24% under the same privacy loss and converging faster than state-of-the-art algorithms. Our frameworks provide an analytical foundation for private AFL and adapt to more complex FL application scenarios.
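To make the setting concrete, the following is a minimal sketch (not the paper's actual algorithms) of the two ingredients the abstract combines: a Gaussian-mechanism DP client update, and an asynchronous server step that down-weights stale updates. The clipping bound, noise multiplier, and polynomial staleness discount are illustrative assumptions; the paper's multi-stage adaptive schedule is not reproduced here.

```python
import numpy as np

def dp_client_update(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's update to clip_norm and add Gaussian noise
    (standard Gaussian mechanism; parameters are illustrative)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def async_server_step(model, update, staleness, lr=0.1):
    """Apply one asynchronous update as soon as it arrives,
    down-weighting stale contributions with a polynomial discount
    (a common AFL heuristic, assumed here for illustration)."""
    weight = (1.0 + staleness) ** -0.5
    return model - lr * weight * update
```

In this sketch the server never waits for stragglers: each update is applied on arrival, and the staleness discount limits the damage from updates computed against an old model, while clipping plus noise bounds each client's privacy loss per contribution.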