

Learn the global prompt in the low-rank tensor space for heterogeneous federated learning.

Author Information

Fu Lele, Huang Sheng, Li Yuecheng, Chen Chuan, Zhang Chuanfu, Zheng Zibin

Affiliation Information

School of Systems Science and Engineering, Sun Yat-sen University, Guangzhou, China.

Publication Information

Neural Netw. 2025 Jul;187:107319. doi: 10.1016/j.neunet.2025.107319. Epub 2025 Mar 5.

DOI: 10.1016/j.neunet.2025.107319
PMID: 40058178
Abstract

Federated learning trains a global model in collaboration with multiple clients, enhancing model generalization while keeping local data on-device and secure. However, federated learning currently faces three intractable challenges: (1) the large number of model parameters results in an excessive communication burden; (2) non-independent and identically distributed (non-IID) local data degrades the global model; (3) model heterogeneity renders traditional federated aggregation infeasible. To address these three difficulties, we propose learning the global prompt in the low-rank tensor space (FedGPT) for heterogeneous federated learning. Specifically, we employ prompts rather than model parameters as the carrier of local knowledge to achieve information interaction among multiple clients. Since the prompts contain only a very small number of variables, the communication volume is greatly reduced. To cope with data heterogeneity, the prompts from different clients are stacked into a third-order tensor, on which tensor singular value decomposition (t-SVD) is performed to extract the global information. Furthermore, the proposed FedGPT can handle model heterogeneity: local models of different sizes can transfer knowledge with the help of the prompts to improve performance. Extensive experiments on three real-world datasets are conducted. Overall, FedGPT outperforms other state-of-the-art methods by up to 13.21% and requires less than 3% of the communication volume of FedAvg, demonstrating its superiority.
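To make the aggregation step concrete, the sketch below is a minimal NumPy illustration of the t-SVD extraction the abstract describes: per-client prompt matrices are stacked into a third-order tensor, transformed with an FFT along the client mode, truncated to a low tubal rank slice by slice, and transformed back. The function name tsvd_global_prompt, the tensor shapes, and the truncation rank are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tsvd_global_prompt(prompts, rank):
    """Illustrative sketch only: extract a shared low-rank component
    from per-client prompts via tensor SVD (t-SVD). Not the authors'
    code; shapes and rank are assumed for demonstration."""
    # Stack K client prompt matrices (n1 x n2) into an n1 x n2 x K tensor.
    T = np.stack(prompts, axis=2)

    # t-SVD: FFT along the third (client) mode, then an ordinary matrix
    # SVD on each frontal slice in the transformed domain.
    T_hat = np.fft.fft(T, axis=2)
    L_hat = np.zeros_like(T_hat)
    for k in range(T_hat.shape[2]):
        U, s, Vh = np.linalg.svd(T_hat[:, :, k], full_matrices=False)
        # Keep only the leading singular triplets (low tubal rank),
        # treating client-specific variation as discardable noise.
        L_hat[:, :, k] = (U[:, :rank] * s[:rank]) @ Vh[:rank, :]

    # Inverse FFT returns to the original domain; the real part is the
    # low-rank tensor whose frontal slices serve as globally informed
    # prompts for the respective clients.
    return np.real(np.fft.ifft(L_hat, axis=2))

# Example with assumed sizes: 8 clients, each holding a 16 x 64 prompt.
prompts = [np.random.randn(16, 64) for _ in range(8)]
global_prompts = tsvd_global_prompt(prompts, rank=4)  # shape (16, 64, 8)
```

Because only these small prompt tensors travel between clients and server, communication cost scales with the prompt size rather than the model size, consistent with the sub-3%-of-FedAvg communication figure reported in the abstract.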


Similar Articles

1. Learn the global prompt in the low-rank tensor space for heterogeneous federated learning.
   Neural Netw. 2025 Jul;187:107319. doi: 10.1016/j.neunet.2025.107319. Epub 2025 Mar 5.
2. Federated learning using model projection for multi-center disease diagnosis with non-IID data.
   Neural Netw. 2024 Oct;178:106409. doi: 10.1016/j.neunet.2024.106409. Epub 2024 May 24.
3. FedART: A neural model integrating federated learning and adaptive resonance theory.
   Neural Netw. 2025 Jan;181:106845. doi: 10.1016/j.neunet.2024.106845. Epub 2024 Nov 4.
4. StoCFL: A stochastically clustered federated learning framework for non-IID data with dynamic client participation.
   Neural Netw. 2025 Jul;187:107278. doi: 10.1016/j.neunet.2025.107278. Epub 2025 Feb 22.
5. FedBM: Stealing knowledge from pre-trained language models for heterogeneous federated learning.
   Med Image Anal. 2025 May;102:103524. doi: 10.1016/j.media.2025.103524. Epub 2025 Mar 7.
6. Data-free knowledge distillation via generator-free data generation for non-IID federated learning.
   Neural Netw. 2024 Nov;179:106627. doi: 10.1016/j.neunet.2024.106627. Epub 2024 Aug 10.
7. Self-attention fusion and adaptive continual updating for multimodal federated learning with heterogeneous data.
   Neural Netw. 2025 Jul;187:107345. doi: 10.1016/j.neunet.2025.107345. Epub 2025 Mar 12.
8. Ternary Compression for Communication-Efficient Federated Learning.
   IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1162-1176. doi: 10.1109/TNNLS.2020.3041185. Epub 2022 Feb 28.
9. Federated learning with workload-aware client scheduling in heterogeneous systems.
   Neural Netw. 2022 Oct;154:560-573. doi: 10.1016/j.neunet.2022.07.030. Epub 2022 Aug 1.
10. FedTP: Federated Learning by Transformer Personalization.
    IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):13426-13440. doi: 10.1109/TNNLS.2023.3269062. Epub 2024 Oct 7.