Communication-efficient federated learning via knowledge distillation.

Affiliations

Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China.

Microsoft Research Asia, Beijing, 100080, China.

Publication Information

Nat Commun. 2022 Apr 19;13(1):2032. doi: 10.1038/s41467-022-29763-x.

Abstract

Federated learning is a privacy-preserving machine learning technique for training intelligent models from decentralized data: it exploits private data by communicating local model updates in each round of model learning rather than the raw data. However, model updates can be extremely large when they contain numerous parameters, and many rounds of communication are needed for model training. The huge communication cost of federated learning imposes heavy overheads on clients and a high environmental burden. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques. FedKD is validated on three different scenarios that require privacy protection, showing that it can reduce communication cost by up to 94.89% while achieving results competitive with centralized model learning. FedKD offers the potential to efficiently deploy privacy-preserving intelligent systems in many scenarios, such as intelligent healthcare and personalization.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8085/9018897/f7ce76edbf57/41467_2022_29763_Fig1_HTML.jpg
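
The abstract names two ingredients behind FedKD's communication savings: adaptive mutual knowledge distillation between a large local teacher model and a small shared student model (only the smaller model's updates are exchanged), and dynamic compression of the exchanged gradients. The sketch below is only an illustration of those two general ideas, not the paper's actual algorithm: the model sizes, loss weighting, temperature, the fixed 95% singular-value energy threshold, and the helper names (mutual_distillation_step, compress_update) are all assumptions made for the example.

```python
# Illustrative sketch (assumed toy models and hyperparameters), not FedKD itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Large local "teacher" (never communicated) and small shared "student" (communicated).
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

def mutual_distillation_step(x, y, temperature=2.0):
    """One local step: both models learn from the private labels and from each
    other's softened predictions (bidirectional distillation)."""
    logits_t, logits_s = teacher(x), student(x)
    task_t = F.cross_entropy(logits_t, y)          # teacher task loss
    task_s = F.cross_entropy(logits_s, y)          # student task loss
    kd_t = F.kl_div(F.log_softmax(logits_t / temperature, dim=-1),
                    F.softmax(logits_s / temperature, dim=-1).detach(),
                    reduction="batchmean")         # teacher learns from student
    kd_s = F.kl_div(F.log_softmax(logits_s / temperature, dim=-1),
                    F.softmax(logits_t / temperature, dim=-1).detach(),
                    reduction="batchmean")         # student learns from teacher
    opt_t.zero_grad(); opt_s.zero_grad()
    (task_t + task_s + kd_t + kd_s).backward()
    opt_t.step(); opt_s.step()

def compress_update(delta, energy=0.95):
    """Low-rank SVD compression of a 2-D parameter update: keep just enough
    singular values to retain `energy` of the spectrum (a fixed stand-in for a
    dynamically chosen precision)."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    k = int((torch.cumsum(S, 0) / S.sum() < energy).sum().item()) + 1
    return U[:, :k], S[:k], Vh[:k, :]              # receiver rebuilds U @ diag(S) @ Vh

# Toy round on random "private" data, then compress the student's weight updates.
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
before = {n: p.detach().clone() for n, p in student.named_parameters()}
mutual_distillation_step(x, y)
for name, p in student.named_parameters():
    if p.ndim == 2:
        U, S, Vh = compress_update(p.detach() - before[name])
        print(f"{name}: kept rank {S.numel()} of {min(p.shape)}")
```

In this reading, the saving comes from exchanging only the small student's updates, further shrunk by low-rank compression, while the large teacher stays on the client; the paper's method additionally adapts the distillation and the compression precision dynamically during training.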
