Wu Shu, Chen Jindou, Nie Xueli, Wang Yong, Zhou Xiancun, Lu Linlin, Peng Wei, Nie Yao, Menhaj Waseef
School of Electronic and Information Engineering, West Anhui University, Lu'an, 237012, China.
School of Physics and Electronic Information, Anhui Normal University, Wuhu, 241002, China.
Sci Rep. 2024 May 27;14(1):12057. doi: 10.1038/s41598-024-62908-0.
Federated learning is a distributed machine learning paradigm whose goal is to collaboratively train a high-quality global model while private training data remains local on distributed clients. However, heterogeneous data distributions across clients pose a severe challenge for federated learning systems and substantially degrade model quality. To address this challenge, we propose global prototype distillation (FedGPD) for heterogeneous federated learning to improve the performance of the global model. The intuition is to use global class prototypes as knowledge to guide local training on the client side. As a result, local objectives become consistent with the global optimum, so FedGPD learns an improved global model. Experiments show that FedGPD outperforms previous state-of-the-art methods by 0.22% to 1.28% in average accuracy on representative benchmark datasets.
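To make the idea of prototype-based guidance concrete, below is a minimal PyTorch-style sketch of how class prototypes can be computed from local embeddings and used as a distillation signal during client training. The function names (compute_class_prototypes, prototype_distillation_loss) and the weighting factor mu are illustrative assumptions, not the paper's actual implementation; the abstract does not specify the exact loss form.

```python
import torch
import torch.nn.functional as F

def compute_class_prototypes(features, labels, num_classes):
    # Average the embeddings of each class to obtain per-class prototypes.
    # In a federated setting, clients would send such local prototypes to the
    # server, which aggregates them into global class prototypes.
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def prototype_distillation_loss(features, labels, global_protos):
    # Pull each sample's embedding toward the global prototype of its class,
    # so that local objectives stay consistent with the global optimum.
    losses = []
    for f, y in zip(features, labels):
        c = int(y)
        if c in global_protos:
            losses.append(F.mse_loss(f, global_protos[c]))
    if not losses:
        return torch.tensor(0.0, device=features.device)
    return torch.stack(losses).mean()

# Local objective on a client (sketch): task loss plus a weighted
# prototype-alignment term. The model is assumed to return both logits and
# feature embeddings; mu is a hypothetical trade-off hyperparameter.
#   logits, features = model(x)
#   loss = F.cross_entropy(logits, y) \
#        + mu * prototype_distillation_loss(features, y, global_protos)
```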