

Similar Articles

1. Communication-efficient federated learning. Proc Natl Acad Sci U S A. 2021 Apr 27;118(17). doi: 10.1073/pnas.2024789118.
2. Wireless Network Optimization for Federated Learning with Model Compression in Hybrid VLC/RF Systems. Entropy (Basel). 2021 Oct 27;23(11):1413. doi: 10.3390/e23111413.
3. OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning. Neural Netw. 2024 Feb;170:635-649. doi: 10.1016/j.neunet.2023.11.044. Epub 2023 Nov 23.
4. Model Pruning Enables Efficient Federated Learning on Edge Devices. IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10374-10386. doi: 10.1109/TNNLS.2022.3166101. Epub 2023 Nov 30.
5. FedDdrl: Federated Double Deep Reinforcement Learning for Heterogeneous IoT with Adaptive Early Client Termination and Local Epoch Adjustment. Sensors (Basel). 2023 Feb 23;23(5):2494. doi: 10.3390/s23052494.
6. Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments. Sensors (Basel). 2024 Apr 12;24(8):2476. doi: 10.3390/s24082476.
7. A Simple Federated Learning-Based Scheme for Security Enhancement Over Internet of Medical Things. IEEE J Biomed Health Inform. 2023 Feb;27(2):652-663. doi: 10.1109/JBHI.2022.3187471. Epub 2023 Feb 3.
8. Resilient and Communication Efficient Learning for Heterogeneous Federated Systems. Proc Mach Learn Res. 2022 Jul;162:27504-27526.
9. Optimal directed acyclic graph federated learning model for energy-efficient IoT communication networks. Sci Rep. 2024 Sep 28;14(1):22525. doi: 10.1038/s41598-024-71995-y.
10. Sustainable Resource Allocation and Reduce Latency Based on Federated-Learning-Enabled Digital Twin in IoT Devices. Sensors (Basel). 2023 Aug 18;23(16):7262. doi: 10.3390/s23167262.

Cited By

1. Adaptive information-constrained mapping for feature compression in edge AI and federated systems. Sci Rep. 2025 Aug 22;15(1):30915. doi: 10.1038/s41598-025-16604-2.
2. Revolutionizing healthcare data analytics with federated learning: A comprehensive survey of applications, systems, and future directions. Comput Struct Biotechnol J. 2025 Jun 11;28:217-238. doi: 10.1016/j.csbj.2025.06.009. eCollection 2025.
3. Introducing edge intelligence to smart meters via federated split learning. Nat Commun. 2024 Oct 19;15(1):9044. doi: 10.1038/s41467-024-53352-9.
4. Limitations and Future Aspects of Communication Costs in Federated Learning: A Survey. Sensors (Basel). 2023 Aug 23;23(17):7358. doi: 10.3390/s23177358.
5. Lead federated neuromorphic learning for wireless edge artificial intelligence. Nat Commun. 2022 Jul 25;13(1):4269. doi: 10.1038/s41467-022-32020-w.
6. Federated Learning in Edge Computing: A Systematic Survey. Sensors (Basel). 2022 Jan 7;22(2):450. doi: 10.3390/s22020450.
7. Wireless Network Optimization for Federated Learning with Model Compression in Hybrid VLC/RF Systems. Entropy (Basel). 2021 Oct 27;23(11):1413. doi: 10.3390/e23111413.

References

1. Classification of Electromyographic Hand Gesture Signals Using Modified Fuzzy C-Means Clustering and Two-Step Machine Learning Approach. IEEE Trans Neural Syst Rehabil Eng. 2020 Jun;28(6):1428-1435. doi: 10.1109/TNSRE.2020.2986884. Epub 2020 Apr 13.
2. Deep Learning for Computer Vision: A Brief Review. Comput Intell Neurosci. 2018 Feb 1;2018:7068349. doi: 10.1155/2018/7068349. eCollection 2018.

Communication-efficient federated learning.

Affiliations

Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen 518172, China.

Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544.

Publication

Proc Natl Acad Sci U S A. 2021 Apr 27;118(17). doi: 10.1073/pnas.2024789118.

DOI: 10.1073/pnas.2024789118
PMID: 33888586
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8092601/
Abstract

Federated learning (FL) enables edge devices, such as Internet of Things devices (e.g., sensors), servers, and institutions (e.g., hospitals), to collaboratively train a machine learning (ML) model without sharing their private data. FL requires devices to exchange their ML parameters iteratively, and thus the time it requires to jointly learn a reliable model depends not only on the number of training steps but also on the ML parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a multitude of participating devices over resource-limited communication networks, for example, wireless networks with limited bandwidth and power. Therefore, the repeated FL parameter transmission from edge devices induces a notable delay, which can be larger than the ML model training time by orders of magnitude. Hence, communication delay constitutes a major bottleneck in FL. Here, a communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission. To further reduce the FL convergence time, a quantization method is proposed to reduce the volume of the model parameters exchanged among devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve the identification accuracy and convergence time by up to 3.6% and 87% compared to standard FL.
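The abstract names two communication-saving ingredients: a probabilistic device selection scheme (devices likely to improve convergence are sampled more often) and quantization of the exchanged parameters. The sketch below illustrates both ideas in minimal form, assuming gradient-norm-proportional selection and uniform scalar quantization; the paper's actual scheme also jointly optimizes wireless resource allocation, and all function names here are illustrative, not from the paper.

```python
import random

def selection_probs(grad_norms):
    """Selection probability per device, proportional to its gradient
    norm (a simple proxy for how much it would improve the model)."""
    total = sum(grad_norms)
    return [g / total for g in grad_norms]

def quantize(update, num_bits=8):
    """Uniform scalar quantization: map each float to one of 2**num_bits
    levels, so only num_bits (plus the range) are sent per parameter."""
    lo, hi = min(update), max(update)
    scale = (hi - lo) / (2 ** num_bits - 1) if hi > lo else 1.0
    levels = [round((u - lo) / scale) for u in update]
    return levels, lo, scale

def dequantize(levels, lo, scale):
    """Server-side reconstruction of a quantized update."""
    return [lo + l * scale for l in levels]

def fl_round(grad_norms, updates, k=2, rng=random.Random(0)):
    """One communication round: sample k devices by importance, then
    each selected device transmits a quantized update instead of
    full-precision floats."""
    probs = selection_probs(grad_norms)
    chosen = rng.choices(range(len(updates)), weights=probs, k=k)
    return [quantize(updates[i]) for i in chosen]
```

With 8-bit quantization the per-parameter payload drops from 32 bits to 8, and the reconstruction error of each value is bounded by the quantization step `scale`, which is the usual accuracy/bandwidth trade-off the abstract refers to.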
