
Global prototype distillation for heterogeneous federated learning.

Authors

Wu Shu, Chen Jindou, Nie Xueli, Wang Yong, Zhou Xiancun, Lu Linlin, Peng Wei, Nie Yao, Menhaj Waseef

Affiliations

School of Electronic and Information Engineering, West Anhui University, Lu'an, 237012, China.

School of Physics and Electronic Information, Anhui Normal University, Wuhu, 241002, China.

Publication

Sci Rep. 2024 May 27;14(1):12057. doi: 10.1038/s41598-024-62908-0.

Abstract

Federated learning is a distributed machine learning paradigm whose goal is to collaboratively train a high-quality global model while private training data remains local on distributed clients. However, heterogeneous data distributions across clients pose a severe challenge for federated learning systems and significantly degrade model quality. To address this challenge, we propose global prototype distillation (FedGPD) for heterogeneous federated learning to improve the performance of the global model. The intuition is to use global class prototypes as knowledge to guide local training on the client side. Eventually, local objectives become consistent with the global optimum, so that FedGPD learns an improved global model. Experiments show that FedGPD outperforms previous state-of-the-art methods by 0.22% to 1.28% in terms of average accuracy on representative benchmark datasets.
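
The abstract only sketches the mechanism at a high level. Below is a minimal, illustrative sketch of how prototype-based distillation in federated learning could look, assuming a PyTorch model that exposes a feature extractor (`model.features`), a classifier head (`model.classifier`), and a feature dimension (`model.feat_dim`); the function names and the weight `lam` are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of prototype distillation for federated learning.
# Assumptions: model.features(x) returns embeddings, model.classifier(feats)
# returns logits, model.feat_dim gives the embedding size.
import torch
import torch.nn.functional as F

def compute_prototypes(model, loader, num_classes, device="cpu"):
    """Client side: average feature embedding per class over local data."""
    model.eval()
    sums = torch.zeros(num_classes, model.feat_dim, device=device)
    counts = torch.zeros(num_classes, device=device)
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            feats = model.features(x)                    # (batch, feat_dim)
            sums.index_add_(0, y, feats)
            counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    protos = sums / counts.clamp(min=1).unsqueeze(1)     # (num_classes, feat_dim)
    return protos, counts

def aggregate_prototypes(client_protos, client_counts):
    """Server side: count-weighted average of client prototypes per class."""
    weighted = torch.stack([p * c.unsqueeze(1)
                            for p, c in zip(client_protos, client_counts)])
    total = torch.stack(client_counts).sum(dim=0).clamp(min=1)
    return weighted.sum(dim=0) / total.unsqueeze(1)

def local_train_step(model, x, y, global_protos, optimizer, lam=1.0):
    """One local update: task loss plus alignment to global class prototypes."""
    model.train()
    feats = model.features(x)
    logits = model.classifier(feats)
    ce = F.cross_entropy(logits, y)
    # Distillation term (assumed form): pull each sample's embedding toward
    # the global prototype of its class, weighted by lam.
    proto_loss = F.mse_loss(feats, global_protos[y].detach())
    loss = ce + lam * proto_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, each round the clients send their class prototypes and counts to the server, the server aggregates them into global prototypes, and the next round of local training regularizes embeddings toward those global prototypes, which is the intuition the abstract describes.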


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/023d/11130332/04c61abf684a/41598_2024_62908_Fig1_HTML.jpg
