

Tackling Intertwined Data and Device Heterogeneities in Federated Learning with Unlimited Staleness.

Authors

Haoming Wang, Wei Gao

Affiliation

University of Pittsburgh.

Publication

Proc AAAI Conf Artif Intell. 2025;39(20):21080-21089. doi: 10.1609/aaai.v39i20.35405. Epub 2025 Apr 11.

Abstract

Federated Learning (FL) can be affected by data and device heterogeneities, caused by clients' different local data distributions and latencies in uploading model updates (i.e., staleness). Traditional schemes treat these heterogeneities as two separate and independent aspects, but this assumption is unrealistic in practical FL scenarios where the two are intertwined. In such cases, traditional FL schemes are ineffective, and a better approach is to convert a stale model update into an unstale one. In this paper, we present a new FL framework that ensures the accuracy and computational efficiency of this conversion, hence effectively tackling the intertwined heterogeneities that may cause unlimited staleness in model updates. Our basic idea is to estimate the distributions of clients' local training data from their uploaded stale model updates, and to use these estimates to compute unstale client model updates. In this way, our approach requires neither an auxiliary dataset nor fully trained client local models, and incurs no additional computation or communication overhead on client devices. We compared our approach with existing FL strategies on mainstream datasets and models, and showed that it can improve the trained model accuracy by up to 25% and reduce the number of required training epochs by up to 35%. Source code is available at: https://github.com/pittisl/FL-with-intertwined-heterogeneity.
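The core mechanism described above, estimating a client's local data distribution from its stale update and then recomputing the update against the current global model, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple logistic (linear) model and uses a gradient-matching proxy for the distribution-estimation step; the function names `logistic_grad`, `estimate_client_data`, and `unstale_update` are hypothetical.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the mean logistic loss for a linear model (illustrative stand-in)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def estimate_client_data(w_stale, stale_update, dim, n=64, steps=300, lr=0.5, seed=0):
    """Fit synthetic data (X, y) whose gradient at the stale global model matches
    the client's observed stale update -- a gradient-matching proxy (hypothetical,
    not the paper's exact estimator) for the client's local data distribution."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    y = (rng.random(n) > 0.5).astype(float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w_stale))
        resid = logistic_grad(w_stale, X, y) - stale_update  # mismatch to observed update
        # d/dX of ||grad(X, y) - stale_update||^2 for the logistic model
        # (chain rule; constant factor 2 absorbed into the step size lr)
        dX = (np.outer(p - y, resid)
              + (p * (1 - p) * (X @ resid))[:, None] * w_stale) / n
        X -= lr * dX
    return X, y

def unstale_update(w_current, w_stale, stale_update, client_lr=0.1):
    """Convert a stale update into an unstale one: estimate the client's data
    from the stale update, then take the client's gradient step at the CURRENT
    global model instead of the old one. All work happens server-side, so the
    client incurs no extra computation or communication."""
    X, y = estimate_client_data(w_stale, stale_update, dim=len(w_stale))
    return -client_lr * logistic_grad(w_current, X, y)

if __name__ == "__main__":
    # Toy example: an update computed at w_stale is replayed against w_current.
    rng = np.random.default_rng(1)
    w_stale, w_current = rng.normal(size=8), rng.normal(size=8)
    X_true = rng.normal(size=(32, 8))
    y_true = (rng.random(32) > 0.5).astype(float)
    u_stale = logistic_grad(w_stale, X_true, y_true)
    print(unstale_update(w_current, w_stale, u_stale))
```

The key design point mirrored here is that the stale update is never averaged in directly; it is used only as evidence about the client's data, and the actual contribution to aggregation is recomputed at the current global model.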


Similar Articles

1. Federated Learning Based on Model Discrepancy and Variance Reduction.
IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):10407-10421. doi: 10.1109/TNNLS.2024.3517658.

2. PGFed: Personalize Each Client's Global Objective for Federated Learning.
Proc IEEE Int Conf Comput Vis. 2023 Oct;2023:3923-3933. doi: 10.1109/iccv51070.2023.00365. Epub 2024 Jan 15.

3. Federated Noisy Client Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1799-1812. doi: 10.1109/TNNLS.2023.3336050. Epub 2025 Jan 7.

4. Allosteric Feature Collaboration for Model-Heterogeneous Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3042-3056. doi: 10.1109/TNNLS.2023.3344084. Epub 2025 Feb 6.

5. FedPD: Defending federated prototype learning against backdoor attacks.
Neural Netw. 2025 Apr;184:107016. doi: 10.1016/j.neunet.2024.107016. Epub 2024 Dec 10.
