
Communication-Efficient Randomized Algorithm for Multi-Kernel Online Federated Learning.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9872-9886. doi: 10.1109/TPAMI.2021.3129809. Epub 2022 Nov 7.

Abstract

Online federated learning (OFL) is a promising framework for learning a sequence of global functions from distributed sequential data at local devices. Within this framework, we first introduce a single-kernel OFL method (termed S-KOFL) by combining random-feature (RF) approximation, online gradient descent (OGD), and federated averaging (FedAvg). As in the centralized counterpart, an extension to a multi-kernel method is necessary. Harnessing the extension principle of the centralized method, we construct a vanilla multi-kernel algorithm (termed vM-KOFL) and prove its asymptotic optimality. It is not practical, however, because its communication overhead grows linearly with the size of the kernel dictionary. Moreover, this problem cannot be addressed by existing communication-efficient techniques (e.g., quantization and sparsification) from conventional federated learning. Our major contribution is a novel randomized algorithm (named eM-KOFL) that exhibits performance similar to vM-KOFL while maintaining a low communication cost. We theoretically prove that eM-KOFL achieves an optimal sublinear regret bound. Mimicking the key concept of eM-KOFL in an efficient way, we propose a more practical algorithm (named pM-KOFL) with the same communication overhead as S-KOFL. Via numerical tests on real datasets, we demonstrate that pM-KOFL yields almost the same performance as vM-KOFL (or eM-KOFL) on various online learning tasks.
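To make the pipeline concrete, below is a minimal Python sketch of the building blocks the abstract names: Rahimi-Recht random features approximating Gaussian kernels, one OGD step per client per round, and FedAvg aggregation at the server, plus the eM-KOFL-style idea of communicating only one randomly selected kernel's parameters each round. Everything here (synthetic data, kernel dictionary, dimensions, step size) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the components named in the
# abstract: random-feature (RF) approximation, online gradient descent (OGD)
# at each client, and federated averaging (FedAvg) at the server. The
# eM-KOFL concept is mimicked by broadcasting the index of ONE randomly
# chosen kernel per round, so only that kernel's parameters are communicated.
# All sizes and step sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 50                                 # random features per kernel (assumed)
KERNEL_BANDWIDTHS = [0.5, 1.0, 2.0]    # kernel dictionary (assumed)
K = len(KERNEL_BANDWIDTHS)
d = 5                                  # input dimension (assumed)
ETA = 0.1                              # OGD step size (assumed)
N_CLIENTS = 4
T = 200                                # number of online rounds

# RF approximation: z(x) = sqrt(2/D) * cos(W x + b) approximates a Gaussian
# kernel of bandwidth sigma when rows of W are drawn N(0, 1/sigma^2).
W = [rng.normal(0.0, 1.0 / s, size=(D, d)) for s in KERNEL_BANDWIDTHS]
b = [rng.uniform(0.0, 2 * np.pi, size=D) for _ in KERNEL_BANDWIDTHS]

def features(x, k):
    return np.sqrt(2.0 / D) * np.cos(W[k] @ x + b[k])

# Global model: one RF weight vector per kernel in the dictionary.
theta = np.zeros((K, D))

for t in range(T):
    # Randomized selection: only kernel k_t is communicated this round,
    # so the per-round cost matches the single-kernel S-KOFL.
    k_t = rng.integers(K)

    local = []
    for _ in range(N_CLIENTS):
        # Each client observes one (x, y) pair from its local stream
        # (synthetic regression data here, purely for illustration).
        x = rng.normal(size=d)
        y = np.sin(x.sum())

        z = features(x, k_t)
        pred = theta[k_t] @ z
        grad = 2.0 * (pred - y) * z             # squared-loss gradient
        local.append(theta[k_t] - ETA * grad)   # one OGD step

    # FedAvg: the server averages the clients' weights for kernel k_t only.
    theta[k_t] = np.mean(local, axis=0)

print("RF weight norms per kernel:", np.linalg.norm(theta, axis=1))
```

Note that the per-round uplink in this sketch is a single D-dimensional vector regardless of the dictionary size K, which is the communication saving the abstract attributes to eM-KOFL over vM-KOFL, whose cost grows linearly in K.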

