Communication-Efficient Randomized Algorithm for Multi-Kernel Online Federated Learning.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9872-9886. doi: 10.1109/TPAMI.2021.3129809. Epub 2022 Nov 7.

DOI: 10.1109/TPAMI.2021.3129809
PMID: 34813467
Abstract

Online federated learning (OFL) is a promising framework for learning a sequence of global functions from distributed sequential data at local devices. In this framework, we first introduce a single-kernel OFL method (termed S-KOFL) by incorporating random-feature (RF) approximation, online gradient descent (OGD), and federated averaging (FedAvg). As in the centralized counterpart, an extension to multiple kernels is necessary. Harnessing the extension principle of the centralized method, we construct a vanilla multi-kernel algorithm (termed vM-KOFL) and prove its asymptotic optimality. However, it is not practical, as its communication overhead grows linearly with the size of the kernel dictionary. Moreover, this problem cannot be addressed via the existing communication-efficient techniques (e.g., quantization and sparsification) of conventional federated learning. Our major contribution is a novel randomized algorithm (named eM-KOFL), which exhibits performance similar to vM-KOFL while maintaining a low communication cost. We theoretically prove that eM-KOFL achieves an optimal sublinear regret bound. Mimicking the key concept of eM-KOFL in an efficient way, we propose a more practical pM-KOFL with the same communication overhead as S-KOFL. Via numerical tests with real datasets, we demonstrate that pM-KOFL yields almost the same performance as vM-KOFL (or eM-KOFL) on various online learning tasks.
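To make the pipeline described above concrete, the following is a minimal sketch of how the three building blocks named for S-KOFL — random-feature (RF) approximation, online gradient descent (OGD), and federated averaging (FedAvg) — can fit together in one online round. Everything specific here (the Gaussian kernel, the squared loss, the step size `eta`, the synthetic data, and all variable names) is an illustrative assumption, not the paper's actual algorithm or analysis.

```python
# Sketch: single-kernel online federated learning built from RF + OGD + FedAvg.
# All concrete choices (kernel, loss, step size, data) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 100      # input dimension, number of random features
K = 10             # number of local devices (clients)
eta = 0.1          # OGD step size (assumed)

# Random Fourier features approximating a Gaussian kernel:
# k(x, x') ~= z(x) @ z(x'), with z(x) = sqrt(2/D) * cos(Wx + b).
W = rng.normal(size=(D, d))                # frequencies sampled once, shared
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

theta = np.zeros(D)                        # global model in RF space

for t in range(100):                       # online rounds
    local_models = []
    for k in range(K):                     # each client sees one (x, y) per round
        x = rng.normal(size=d)
        y = np.sin(x.sum())                # synthetic target (assumed)
        phi = z(x)
        grad = (theta @ phi - y) * phi     # gradient of squared loss wrt theta
        local_models.append(theta - eta * grad)  # one local OGD step
    theta = np.mean(local_models, axis=0)  # FedAvg: average the local models
```

In this picture, a multi-kernel variant maintains one such RF model per kernel in a dictionary, so each client would upload one D-dimensional update per kernel; that is the linear growth in communication the abstract attributes to vM-KOFL, and, per the abstract, the randomization in eM-KOFL/pM-KOFL is what brings the per-round cost back down, with pM-KOFL matching the single-kernel cost of S-KOFL.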


Similar Articles

1. Tighter Regret Analysis and Optimization of Online Federated Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15772-15789. doi: 10.1109/TPAMI.2023.3316672. Epub 2023 Nov 3.
2. Distributed Online Learning With Multiple Kernels.
IEEE Trans Neural Netw Learn Syst. 2023 Mar;34(3):1263-1277. doi: 10.1109/TNNLS.2021.3105146. Epub 2023 Feb 28.
3. QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17987-17999. doi: 10.1109/TNNLS.2023.3310499. Epub 2024 Dec 2.
4. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data.
IEEE Trans Neural Netw Learn Syst. 2020 Sep;31(9):3400-3413. doi: 10.1109/TNNLS.2019.2944481. Epub 2019 Nov 1.
5. Ternary Compression for Communication-Efficient Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1162-1176. doi: 10.1109/TNNLS.2020.3041185. Epub 2022 Feb 28.
6. Online Multikernel Learning Method via Online Biconvex Optimization.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16630-16643. doi: 10.1109/TNNLS.2023.3296895. Epub 2024 Oct 29.
7. Decentralized Federated Averaging.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4289-4301. doi: 10.1109/TPAMI.2022.3196503. Epub 2023 Mar 7.
8. Active Learning With Multiple Kernels.
IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2980-2994. doi: 10.1109/TNNLS.2020.3047953. Epub 2022 Jul 6.
9. Online selective kernel-based temporal difference learning.
IEEE Trans Neural Netw Learn Syst. 2013 Dec;24(12):1944-56. doi: 10.1109/TNNLS.2013.2270561.

Cited By

1. Privacy by Projection: Federated Population Density Estimation by Projecting on Random Features.
Proc Priv Enhanc Technol. 2023 Jul;2023(1):309-324. doi: 10.56553/popets-2023-0019.