
Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC

Authors

Ren Yaoyao, Cao Yu, Ye Chengyin, Cheng Xu

Affiliations

School of Information and Control Engineering, Liaoning Petrochemical University, Fushun, Liaoning, People's Republic of China.

School of Economics and Management, Shenyang Agricultural University, Shenyang, Liaoning, People's Republic of China.

Publication

Sci Rep. 2023 Jul 19;13(1):11658. doi: 10.1038/s41598-023-38916-x.

DOI: 10.1038/s41598-023-38916-x
PMID: 37468562
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10356777/
Abstract

Federated learning enables multiple nodes to perform local computations and collaborate to complete machine learning tasks without centralizing private data of nodes. However, the frequent model gradients upload/download operations required by the framework result in high communication costs, which have become the main bottleneck for federated learning as deep models scale up, hindering its performance. In this paper, we propose a two-layer accumulated quantized compression algorithm (TLAQC) that effectively reduces the communication cost of federated learning. TLAQC achieves this by reducing both the cost of individual communication and the number of global communication rounds. TLAQC introduces a revised quantization method called RQSGD, which employs zero-value correction to mitigate ineffective quantization phenomena and minimize average quantization errors. Additionally, TLAQC reduces the frequency of gradient information uploads through an adaptive threshold and parameter self-inspection mechanism, further reducing communication costs. It also accumulates quantization errors and retained weight deltas to compensate for gradient knowledge loss. Through quantization correction and two-layer accumulation, TLAQC significantly reduces precision loss caused by communication compression. Experimental results demonstrate that RQSGD achieves an incidence of ineffective quantization as low as 0.003% and reduces the average quantization error to 1.6 × [Formula: see text]. Compared to full-precision FedAVG, TLAQC compresses uploaded traffic to only 6.73% while increasing accuracy by 1.25%.

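The compression loop the abstract describes (quantize each round's gradient, then accumulate the quantization error so later rounds can compensate) can be sketched roughly as follows. This is a minimal illustration of stochastic quantization with error feedback, not the paper's exact RQSGD; the function name, level count, and max-norm scaling are assumptions for the sketch.

```python
import numpy as np

def quantize_with_error_feedback(grad, residual, num_levels=4, rng=None):
    """Stochastically quantize `grad` after folding in the accumulated
    residual from earlier rounds; return (quantized, new_residual).
    Illustrative sketch only -- names and choices are not the paper's."""
    rng = rng or np.random.default_rng()
    corrected = grad + residual            # fold in error from earlier rounds
    scale = np.max(np.abs(corrected))
    if scale == 0:
        return np.zeros_like(corrected), residual
    # Map |value|/scale onto num_levels levels with stochastic rounding,
    # so the quantizer is unbiased in expectation.
    s = np.abs(corrected) / scale * num_levels
    lower = np.floor(s)
    level = lower + (rng.random(s.shape) < (s - lower))
    quantized = np.sign(corrected) * level / num_levels * scale
    # Keep what quantization discarded so the next round can compensate.
    return quantized, corrected - quantized
```

By construction, `quantized + new_residual` reconstructs the corrected gradient exactly, which is why carrying the residual forward bounds the precision loss that compression would otherwise cause.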

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1620/10356777/86ce7641503b/41598_2023_38916_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1620/10356777/8a47ed983a8d/41598_2023_38916_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1620/10356777/1a26d61542bf/41598_2023_38916_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1620/10356777/c17e62db464b/41598_2023_38916_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1620/10356777/17c061c23ce8/41598_2023_38916_Fig5_HTML.jpg

Similar Articles

1. Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC.
Sci Rep. 2023 Jul 19;13(1):11658. doi: 10.1038/s41598-023-38916-x.
2. Ternary Compression for Communication-Efficient Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1162-1176. doi: 10.1109/TNNLS.2020.3041185. Epub 2022 Feb 28.
3. Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2031-2044. doi: 10.1109/TPAMI.2020.3033286. Epub 2022 Mar 4.
4. Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption.
Entropy (Basel). 2023 Nov 16;25(11):1550. doi: 10.3390/e25111550.
5. Decentralized Federated Averaging.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4289-4301. doi: 10.1109/TPAMI.2022.3196503. Epub 2023 Mar 7.
6. Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design.
Entropy (Basel). 2023 Aug 14;25(8):1205. doi: 10.3390/e25081205.
7. Communication-Efficient Randomized Algorithm for Multi-Kernel Online Federated Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9872-9886. doi: 10.1109/TPAMI.2021.3129809. Epub 2022 Nov 7.
8. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data.
IEEE Trans Neural Netw Learn Syst. 2020 Sep;31(9):3400-3413. doi: 10.1109/TNNLS.2019.2944481. Epub 2019 Nov 1.
9. A new federated learning-based wireless communication and client scheduling solution for combating COVID-19.
Comput Commun. 2023 Jun 1;206:101-109. doi: 10.1016/j.comcom.2023.04.023. Epub 2023 May 6.
10. MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation.
Med Image Anal. 2021 Oct;73:102200. doi: 10.1016/j.media.2021.102200. Epub 2021 Aug 2.

Cited By

1. Spatial interpolation of global DEM using federated deep learning.
Sci Rep. 2024 Sep 27;14(1):22089. doi: 10.1038/s41598-024-72807-z.