Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning

Authors

Sun Jun, Chen Tianyi, Giannakis Georgios B, Yang Qinmin, Yang Zaiyue

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2031-2044. doi: 10.1109/TPAMI.2020.3033286. Epub 2022 Mar 4.

DOI: 10.1109/TPAMI.2020.3033286
PMID: 33095709
Abstract

This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach, characterized by adaptive communication of the quantized gradients. Specifically, the federated learning setup builds upon the server-worker infrastructure, where the workers calculate local gradients and upload them to the server; the server then obtains the global gradient by aggregating all the local gradients and uses it to update the model parameter. The key idea for saving worker-to-server communication is to quantize gradients and to skip less informative quantized gradient communications by reusing previous gradients. Quantizing and skipping result in 'lazy' worker-server communication, which justifies the term Lazily Aggregated Quantized (LAQ) gradient. Theoretically, the LAQ algorithm achieves the same linear convergence as gradient descent in the strongly convex case, while effecting major savings in communication in terms of transmitted bits and communication rounds. Empirically, extensive experiments using realistic data corroborate a significant communication reduction compared with state-of-the-art gradient- and stochastic gradient-based algorithms.
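For context on the convergence claim above: for an L-smooth, mu-strongly convex objective F, plain gradient descent with step size 1/L satisfies the standard linear rate

F(\theta^{k}) - F(\theta^{\star}) \le \left(1 - \tfrac{\mu}{L}\right)^{k} \bigl(F(\theta^{0}) - F(\theta^{\star})\bigr),

and the abstract's claim is that LAQ matches this linear behavior despite quantization and skipped uploads. The paper's exact constants, which depend on the quantization and skipping parameters, are not reproduced here.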

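To make the worker-side mechanism concrete, below is a minimal Python sketch of the quantize-then-maybe-skip loop described in the abstract. It is an illustration under simplifying assumptions, not the paper's implementation: the uniform quantizer and the fixed skip_threshold stand in for the paper's quantization scheme and its adaptive skip rule (which weighs the quantized-gradient innovation against recent changes in the model parameters), and all names (LazyWorker, maybe_upload, etc.) are hypothetical.

import numpy as np

def quantize(v, ref, radius, bits=4):
    # Uniform grid of 2**bits - 1 steps on [ref - radius, ref + radius].
    # Illustrative stand-in for the paper's quantizer.
    levels = 2 ** bits - 1
    step = 2.0 * radius / levels
    return ref + np.round((v - ref + radius) / step) * step - radius

class LazyWorker:
    # One worker in a simplified LAQ round: quantize the local gradient
    # around the last value the server holds, and skip the upload when
    # the quantized innovation is small.
    def __init__(self, X, y, skip_threshold=1e-3, bits=4):
        self.X, self.y = X, y
        self.skip_threshold = skip_threshold  # fixed stand-in for LAQ's adaptive rule
        self.bits = bits
        self.last_q = None                    # last gradient the server has from us

    def local_gradient(self, theta):
        # Least-squares loss: a concrete strongly convex example.
        return self.X.T @ (self.X @ theta - self.y) / len(self.y)

    def maybe_upload(self, theta):
        g = self.local_gradient(theta)
        if self.last_q is None:               # first round: always transmit
            self.last_q = g.copy()
            return self.last_q
        radius = np.abs(g - self.last_q).max() + 1e-12
        q = quantize(g, self.last_q, radius, self.bits)
        if np.linalg.norm(q - self.last_q) ** 2 < self.skip_threshold:
            return None                       # skip: server reuses last_q
        self.last_q = q
        return q

def train(workers, theta, lr=0.1, rounds=200):
    # Server loop: aggregate the stored quantized gradients, reusing stale
    # ones from workers that skipped this round.
    stored = [np.zeros_like(theta) for _ in workers]
    for _ in range(rounds):
        for i, w in enumerate(workers):
            q = w.maybe_upload(theta)
            if q is not None:
                stored[i] = q
        theta = theta - lr * sum(stored) / len(workers)
    return theta

As a quick smoke test, workers = [LazyWorker(np.random.randn(50, 5), np.random.randn(50)) for _ in range(4)] followed by train(workers, np.zeros(5)) drives theta toward a neighborhood of the pooled least-squares solution (the fixed threshold leaves a small error floor, which the paper's adaptive rule avoids), while many maybe_upload calls return None; those skipped rounds are the communication savings being claimed.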

Similar Articles

1. Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2031-2044. doi: 10.1109/TPAMI.2020.3033286. Epub 2022 Mar 4.
2. Communication-efficient distributed cubic Newton with compressed lazy Hessian.
Neural Netw. 2024 Jun;174:106212. doi: 10.1016/j.neunet.2024.106212. Epub 2024 Feb 27.
3. Communication-Efficient Nonconvex Federated Learning With Error Feedback for Uplink and Downlink.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1003-1014. doi: 10.1109/TNNLS.2023.3333804. Epub 2025 Jan 7.
4. Communication-Censored Distributed Stochastic Gradient Descent.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6831-6843. doi: 10.1109/TNNLS.2021.3083655. Epub 2022 Oct 27.
5. LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning.
IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):962-974. doi: 10.1109/TNNLS.2020.2979762. Epub 2021 Mar 1.
6. Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC.
Sci Rep. 2023 Jul 19;13(1):11658. doi: 10.1038/s41598-023-38916-x.
7. Communication-Efficient Federated Deep Learning With Layerwise Asynchronous Model Update and Temporally Weighted Aggregation.
IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4229-4238. doi: 10.1109/TNNLS.2019.2953131. Epub 2019 Dec 30.
8. Efficient Federated Learning Via Local Adaptive Amended Optimizer With Linear Speedup.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14453-14464. doi: 10.1109/TPAMI.2023.3300886. Epub 2023 Nov 3.
9. Decentralized Federated Averaging.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4289-4301. doi: 10.1109/TPAMI.2022.3196503. Epub 2023 Mar 7.
10. Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul.
Sensors (Basel). 2021 Oct 13;21(20):6791. doi: 10.3390/s21206791.

Cited By

1. Reviewing Federated Machine Learning and Its Use in Diseases Prediction.
Sensors (Basel). 2023 Feb 13;23(4):2112. doi: 10.3390/s23042112.
2. Efficient Asynchronous Federated Learning for AUV Swarm.
Sensors (Basel). 2022 Nov 11;22(22):8727. doi: 10.3390/s22228727.