

Decentralized Federated Averaging.

Authors

Sun Tao, Li Dongsheng, Wang Bao

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4289-4301. doi: 10.1109/TPAMI.2022.3196503. Epub 2023 Mar 7.

DOI: 10.1109/TPAMI.2022.3196503
PMID: 35925850

Abstract

Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with an enormous number of clients. In FedAvg, clients keep their data locally for privacy protection; a central parameter server is used to communicate between clients. This central server distributes the parameters to each client and collects the updated parameters from clients. FedAvg is mostly studied in centralized fashions, requiring massive communications between the central server and clients, which leads to possible channel blocking. Moreover, attacking the central server can break the whole system's privacy. Indeed, decentralization can significantly reduce the communication of the busiest node (the central one) because all nodes only communicate with their neighbors. To this end, in this paper, we study the decentralized FedAvg with momentum (DFedAvgM), implemented on clients that are connected by an undirected graph. In DFedAvgM, all clients perform stochastic gradient descent with momentum and communicate with their neighbors only. To further reduce the communication cost, we also consider the quantized DFedAvgM. The proposed algorithm involves the mixing matrix, momentum, client training with multiple local iterations, and quantization, introducing extra items in the Lyapunov analysis. Thus, the analysis of this paper is much more challenging than previous decentralized (momentum) SGD or FedAvg. We prove convergence of the (quantized) DFedAvgM under trivial assumptions; the convergence rate can be improved to sublinear when the loss function satisfies the PŁ property. Numerically, we find that the proposed algorithm outperforms FedAvg in both convergence speed and communication cost.
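The abstract outlines the DFedAvgM loop: each client runs several momentum-SGD steps on its own loss, then averages its parameters with its graph neighbors through a mixing matrix, with no central server involved. A minimal sketch of that loop, under illustrative assumptions not taken from the paper (toy quadratic losses, a ring topology, arbitrary hyperparameters; the quantized variant is omitted):

```python
import numpy as np

# Hedged sketch of decentralized FedAvg with momentum (DFedAvgM).
# Toy setup: client i minimizes f_i(x) = 0.5 * ||x - c_i||^2, so the
# global optimum of the average loss is mean(c_i). All names and
# hyperparameters here are illustrative, not from the paper.

rng = np.random.default_rng(0)
n_clients, dim = 8, 5
targets = rng.normal(size=(n_clients, dim))  # each client's local optimum c_i

# Symmetric, doubly stochastic mixing matrix for a ring graph:
# each client averages itself with its two neighbors.
W = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    for j in (i - 1, i, i + 1):
        W[j % n_clients, i] = 1.0 / 3.0

x = np.zeros((n_clients, dim))  # one model per client
lr, beta, local_steps = 0.02, 0.9, 2

for _ in range(300):
    # Local phase: momentum SGD on each client's own loss only.
    for i in range(n_clients):
        v = np.zeros(dim)
        for _ in range(local_steps):
            grad = x[i] - targets[i]  # gradient of 0.5*||x - c_i||^2
            v = beta * v + grad
            x[i] -= lr * v
    # Communication phase: exchange parameters with neighbors only,
    # i.e. one multiplication by the mixing matrix.
    x = W @ x

avg_model = x.mean(axis=0)  # drifts toward the global optimum mean(c_i)
```

Because W is doubly stochastic, the mixing step preserves the average model while pulling clients toward consensus; the local phase then pulls that average toward the minimizer of the average loss, which is the mechanism the convergence analysis formalizes.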


Similar Articles

1. Decentralized Federated Averaging.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4289-4301. doi: 10.1109/TPAMI.2022.3196503. Epub 2023 Mar 7.
2. Ternary Compression for Communication-Efficient Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1162-1176. doi: 10.1109/TNNLS.2020.3041185. Epub 2022 Feb 28.
3. A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8036-8047. doi: 10.1109/TPAMI.2021.3107796. Epub 2022 Oct 4.
4. Averaging Is Probably Not the Optimum Way of Aggregating Parameters in Federated Learning.
Entropy (Basel). 2020 Mar 11;22(3):314. doi: 10.3390/e22030314.
5. FedPSO: Federated Learning Using Particle Swarm Optimization to Reduce Communication Costs.
Sensors (Basel). 2021 Jan 16;21(2):600. doi: 10.3390/s21020600.
6. Communication-Efficient Federated Deep Learning With Layerwise Asynchronous Model Update and Temporally Weighted Aggregation.
IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4229-4238. doi: 10.1109/TNNLS.2019.2953131. Epub 2019 Dec 30.
7. Federated learning with workload-aware client scheduling in heterogeneous systems.
Neural Netw. 2022 Oct;154:560-573. doi: 10.1016/j.neunet.2022.07.030. Epub 2022 Aug 1.
8. Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments.
Sensors (Basel). 2024 Apr 12;24(8):2476. doi: 10.3390/s24082476.
9. Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2031-2044. doi: 10.1109/TPAMI.2020.3033286. Epub 2022 Mar 4.
10. A New Look and Convergence Rate of Federated Multitask Learning With Laplacian Regularization.
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8075-8085. doi: 10.1109/TNNLS.2022.3224252. Epub 2024 Jun 3.

Cited By

1. Balancing centralisation and decentralisation in federated learning for Earth Observation-based agricultural predictions.
Sci Rep. 2025 Mar 26;15(1):10454. doi: 10.1038/s41598-025-94244-2.
2. FedDL: personalized federated deep learning for enhanced detection and classification of diabetic retinopathy.
PeerJ Comput Sci. 2024 Dec 23;10:e2508. doi: 10.7717/peerj-cs.2508. eCollection 2024.
3. Federated Learning in Smart Healthcare: A Comprehensive Review on Privacy, Security, and Predictive Analytics with IoT Integration.
Healthcare (Basel). 2024 Dec 22;12(24):2587. doi: 10.3390/healthcare12242587.
4. FedDNA: Federated learning using dynamic node alignment.
PLoS One. 2023 Jul 3;18(7):e0288157. doi: 10.1371/journal.pone.0288157. eCollection 2023.
5. Federated learning for preserving data privacy in collaborative healthcare research.
Digit Health. 2022 Oct 27;8:20552076221134455. doi: 10.1177/20552076221134455. eCollection 2022 Jan-Dec.
6. Isolate sets partition benefits community detection of parallel Louvain method.
Sci Rep. 2022 May 17;12(1):8248. doi: 10.1038/s41598-022-11987-y.