

FedLGA: Toward System-Heterogeneity of Federated Learning via Local Gradient Approximation.

Authors

Li Xingyu, Qu Zhe, Tang Bo, Lu Zhuo

Publication

IEEE Trans Cybern. 2024 Jan;54(1):401-414. doi: 10.1109/TCYB.2023.3247365. Epub 2023 Dec 20.

Abstract

Federated learning (FL) is a decentralized machine learning architecture that leverages a large number of remote devices to learn a joint model from distributed training data. However, system heterogeneity is a major obstacle to robust distributed learning in an FL network, and it arises from two sources: 1) device heterogeneity, due to the diverse computational capacities of the devices, and 2) data heterogeneity, due to non-identically distributed data across the network. Prior studies of heterogeneous FL, for example, FedProx, lack a formal treatment of the problem, which remains open. This work first formalizes the system-heterogeneous FL problem and proposes a new algorithm, federated local gradient approximation (FedLGA), which addresses it by bridging the divergence of local model updates via gradient approximation. To achieve this, FedLGA provides an alternated Hessian estimation method that requires only extra linear complexity on the aggregator. Theoretically, we show that with a device-heterogeneity ratio ρ, FedLGA achieves convergence rates of O((1+ρ)/√(ENT) + 1/T) and O((1+ρ)√E/√(TK) + 1/T) on non-i.i.d. distributed FL training data for nonconvex optimization problems under full and partial device participation, respectively, where E is the number of local epochs, T is the number of communication rounds, N is the total number of devices, and K is the number of devices selected per round under the partial-participation scheme. Comprehensive experiments on multiple datasets indicate that FedLGA effectively addresses the system-heterogeneity problem and outperforms current FL methods. Specifically, on the CIFAR-10 dataset, FedLGA improves the model's best testing accuracy from 60.91% (FedAvg) to 64.44%.
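The abstract's core idea, compensating on the aggregator for devices that complete fewer local epochs than requested, can be illustrated with a heavily simplified sketch. The snippet below is not the paper's algorithm: FedLGA uses an alternated Hessian estimate, whereas this sketch stands in a linear extrapolation of each device's observed update direction as the gradient approximation, and all data, variable names, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, epochs, lr=0.1):
    """Run `epochs` full-batch gradient steps of least-squares on one device."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic network: each device holds its own data and has its own
# compute budget (device heterogeneity -> unequal local epochs).
d, n_dev = 5, 4
w_true = rng.normal(size=d)
devices = []
for _ in range(n_dev):
    X = rng.normal(size=(50, d))
    devices.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
budgets = [5, 5, 2, 1]       # epochs each device actually completes
E = max(budgets)             # epochs the server asked for

w = np.zeros(d)
for _ in range(30):          # communication rounds
    corrected = []
    for (X, y), e in zip(devices, budgets):
        w_i = local_train(w, X, y, e)
        # Approximate the missing e..E epochs by extrapolating the
        # observed update direction (stand-in for FedLGA's gradient
        # approximation via Hessian estimation on the aggregator).
        corrected.append(w + (E / e) * (w_i - w))
    w = np.mean(corrected, axis=0)   # FedAvg-style aggregation
```

On this toy least-squares problem, the corrected aggregation keeps the slow devices from dragging the joint model toward their under-trained updates, which is the divergence the paper targets.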

