Luo Jun, Wu Shandong
Intelligent Systems Program, University of Pittsburgh.
Department of Radiology, University of Pittsburgh.
IJCAI (U S). 2022 Jul;2022:2166-2173. doi: 10.24963/ijcai.2022/301.
Conventional federated learning (FL) trains one global model for a federation of clients with decentralized data, reducing the privacy risk of centralized training. However, the distribution shift across non-IID datasets often poses a challenge to this one-model-fits-all solution. Personalized FL aims to mitigate this issue systematically. In this work, we propose APPLE, a personalized cross-silo FL framework that adaptively learns how much each client can benefit from other clients' models. We also introduce a method to flexibly control the focus of training APPLE between global and local objectives. We empirically evaluate our method's convergence and generalization behaviors, and perform extensive experiments on two benchmark datasets and two medical imaging datasets under two non-IID settings. The results show that the proposed personalized FL framework, APPLE, achieves state-of-the-art performance compared to several other personalized FL approaches in the literature. The code is publicly available at https://github.com/ljaiverson/pFL-APPLE.
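The mechanism summarized in the abstract can be illustrated with a small sketch. The PyTorch snippet below is not the authors' implementation (see the linked repository for that); it only illustrates, under assumed details, the idea of each client learning how much weight to place on every client's model, with a scheduled penalty that shifts training from a global focus toward a local one. The penalty form, the uniform-prior target, and the decay schedule (`lam`, `mu_t`) are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only (assumed details, not the authors' reference code):
# each client i keeps a learnable vector p_i that weights all clients' "core"
# models into its personalized model; a decaying penalty toward the uniform
# average trades off global vs. local training focus.
import torch
import torch.nn as nn

M = 3                      # number of clients in the federation
DIM, CLASSES = 10, 2       # toy feature and label dimensions

def make_core_model():
    return nn.Linear(DIM, CLASSES)

core_models = [make_core_model() for _ in range(M)]                      # one core model per client
p = [torch.full((M,), 1.0 / M, requires_grad=True) for _ in range(M)]   # learnable per-client weights

def personalized_forward(i, x):
    """Client i's prediction: a p_i-weighted combination of all core models' outputs."""
    return sum(p[i][j] * core_models[j](x) for j in range(M))

def local_step(i, x, y, round_t, total_rounds, lam=1.0):
    """One local update on client i's data. The penalty coefficient mu_t decays
    over rounds, moving the focus from the global objective to the local one
    (an assumed schedule for illustration)."""
    criterion = nn.CrossEntropyLoss()
    params = list(core_models[i].parameters()) + [p[i]]   # only client i's own core model and p_i are updated
    opt = torch.optim.SGD(params, lr=0.1)

    mu_t = lam * (1.0 - round_t / total_rounds)           # assumed decay schedule
    uniform = torch.full((M,), 1.0 / M)

    opt.zero_grad()
    loss = criterion(personalized_forward(i, x), y)
    loss = loss + mu_t * torch.sum((p[i] - uniform) ** 2) # keep p_i near uniform early in training
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: a few simulated rounds with random local data per client.
for t in range(5):
    for i in range(M):
        x, y = torch.randn(8, DIM), torch.randint(0, CLASSES, (8,))
        local_step(i, x, y, round_t=t, total_rounds=5)
```

In this toy setup, early rounds keep each p_i close to the uniform average (a global focus), while later rounds let p_i drift toward whichever clients' core models most benefit client i's own data (a local focus), mirroring the abstract's description of adaptively learning how much each client benefits from the others.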