
Data-Free Knowledge Distillation for Heterogeneous Federated Learning.

Authors

Zhu Zhuangdi, Hong Junyuan, Zhou Jiayu

Affiliation

Department of Computer Science and Engineering, Michigan State University, Michigan, USA.

Publication

Proc Mach Learn Res. 2021 Jul;139:12878-12889.

Abstract

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively aggregates the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges to FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly aggregating their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. In this work, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies powered by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state-of-the-art.
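The abstract describes the mechanism only at a high level: a server-side generator distills the ensemble knowledge of heterogeneous users without any proxy data, and the broadcast generator then regularizes each user's local training. The sketch below illustrates one way these pieces could fit together in PyTorch; the names (Generator, train_generator, local_regularizer), dimensions, and loss choices are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the data-free knowledge distillation idea from the
# abstract. All sizes, weights, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM, FEATURE_DIM = 10, 32, 64

class Generator(nn.Module):
    """Lightweight generator: maps (noise, label) to a feature-level sample."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEATURE_DIM),
        )

    def forward(self, y):
        z = torch.randn(y.size(0), LATENT_DIM)
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def train_generator(generator, user_heads, steps=100, lr=1e-3):
    """Server side: fit the generator so its outputs are labeled consistently
    by the ensemble of user classifier heads -- no proxy dataset needed."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        y = torch.randint(0, NUM_CLASSES, (64,))
        feats = generator(y)
        # Average every user's cross-entropy on the synthesized features;
        # minimizing it distills the ensemble knowledge into the generator.
        loss = torch.stack(
            [F.cross_entropy(head(feats), y) for head in user_heads]
        ).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def local_regularizer(generator, local_head, batch_size=64):
    """Client side: the broadcast generator acts as an inductive bias --
    the local model should also classify generated features correctly."""
    with torch.no_grad():  # generator is frozen on the client
        y = torch.randint(0, NUM_CLASSES, (batch_size,))
        feats = generator(y)
    return F.cross_entropy(local_head(feats), y)

# Toy usage: three simulated user heads sharing one generator.
heads = [nn.Linear(FEATURE_DIM, NUM_CLASSES) for _ in range(3)]
gen = Generator()
train_generator(gen, heads)
reg = local_regularizer(gen, heads[0])  # added to each client's local loss
```

In this reading, the generator replaces the proxy dataset that earlier distillation-based FL methods require, and shipping it to clients lets the ensemble knowledge also guide local training rather than only the server model.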


Similar Articles

Federated Learning With Privacy-Preserving Ensemble Attention Distillation.
IEEE Trans Med Imaging. 2023 Jul;42(7):2057-2067. doi: 10.1109/TMI.2022.3213244. Epub 2023 Jun 30.

Improving Generalization and Personalization in Model-Heterogeneous Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):88-101. doi: 10.1109/TNNLS.2024.3405190. Epub 2025 Jan 7.

Robust Federated Learning for Heterogeneous Model and Data.
Int J Neural Syst. 2024 Apr;34(4):2450019. doi: 10.1142/S0129065724500199. Epub 2024 Feb 19.

One-shot Federated Learning without server-side training.
Neural Netw. 2023 Jul;164:203-215. doi: 10.1016/j.neunet.2023.04.035. Epub 2023 Apr 26.

Cited By

Efficient federated learning via aggregation of base models.
PLoS One. 2025 Aug 14;20(8):e0327883. doi: 10.1371/journal.pone.0327883. eCollection 2025.
