

Personalized Federated Learning Based on Dynamic Parameter Fusion and Prototype Alignment.

Authors

Chen Ying, Wen Jing, Liang Shaoling, Chen Zhaofa, Huang Baohua

Affiliations

School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.

Guangxi Key Laboratory of Digital Infrastructure, Guangxi Zhuang Autonomous Region Information Center, Nanning 530000, China.

Publication

Sensors (Basel). 2025 Aug 15;25(16):5076. doi: 10.3390/s25165076.

Abstract

To address the limitation of generalization of federated learning under non-independent and identically distributed (Non-IID) data, we propose FedDFPA, a personalized federated learning framework that integrates dynamic parameter fusion and prototype alignment. We design a class-wise dynamic parameter fusion mechanism that adaptively fuses global and local classifier parameters at the class level. It enables each client to preserve its reliable local knowledge while selectively incorporating beneficial global information for personalized classification. We introduce a prototype alignment mechanism based on both global and historical information. By aligning current local features with global prototypes and historical local prototypes, it improves cross-client semantic consistency and enhances the stability of local features. To evaluate the effectiveness of FedDFPA, we conduct extensive experiments on various Non-IID settings and client participation rates. Compared to the average performance of state-of-the-art algorithms, FedDFPA improves the average test accuracy by 3.59% and 4.71% under practical and pathological heterogeneous settings, respectively. These results confirm the effectiveness of our dual-mechanism design in achieving a better balance between personalization and collaboration in federated learning.
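The two mechanisms the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the count-based fusion weight `alpha`, the loss weights `lam_g`/`lam_h`, and all function names are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def fuse_classifier(local_w, global_w, class_counts, tau=10.0):
    """Class-wise dynamic parameter fusion (sketch).

    For each class c, blend the local and global classifier rows:
    classes with many local samples trust the local row more, while
    rare classes lean on the global row. The count-based weight is an
    assumed heuristic standing in for the paper's adaptive rule.
    local_w, global_w: (C, d) classifier weight matrices.
    class_counts: (C,) local sample counts per class.
    """
    alpha = class_counts / (class_counts + tau)          # (C,) in [0, 1)
    return alpha[:, None] * local_w + (1 - alpha)[:, None] * global_w

def prototype_alignment_loss(feats, labels, global_protos, hist_protos,
                             lam_g=1.0, lam_h=0.5):
    """Prototype alignment (sketch, squared-distance form assumed).

    Pulls each current local feature toward (a) the global prototype of
    its class, for cross-client semantic consistency, and (b) the
    client's historical local prototype, for feature stability.
    feats: (N, d) current local features; labels: (N,) class indices.
    """
    g = global_protos[labels]                            # (N, d)
    h = hist_protos[labels]                              # (N, d)
    loss_g = np.mean(np.sum((feats - g) ** 2, axis=1))
    loss_h = np.mean(np.sum((feats - h) ** 2, axis=1))
    return lam_g * loss_g + lam_h * loss_h
```

In this sketch, a class the client has never seen (count 0) takes its classifier row entirely from the global model, while a well-represented class keeps mostly local parameters, which is one way to read "preserve reliable local knowledge while selectively incorporating beneficial global information".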


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8964/12389858/93e5cd6bc046/sensors-25-05076-g001.jpg

Similar Articles

1
FedEmerge: An Entropy-Guided Federated Learning Method for Sensor Networks and Edge Intelligence.
Sensors (Basel). 2025 Jun 14;25(12):3728. doi: 10.3390/s25123728.
2
MolCFL: A personalized and privacy-preserving drug discovery framework based on generative clustered federated learning.
J Biomed Inform. 2024 Sep;157:104712. doi: 10.1016/j.jbi.2024.104712. Epub 2024 Aug 23.
3
FGDN: A Federated Graph Convolutional Network framework for multi-site major depression disorder diagnosis.
Comput Med Imaging Graph. 2025 Sep;124:102612. doi: 10.1016/j.compmedimag.2025.102612. Epub 2025 Aug 9.
4
SFPGCL: Specificity-preserving federated population graph contrastive learning for multi-site ASD identification using rs-fMRI data.
Comput Med Imaging Graph. 2025 Sep;124:102558. doi: 10.1016/j.compmedimag.2025.102558. Epub 2025 May 16.
5
Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning With Heterogeneous LoRA Allocation.
IEEE Trans Neural Netw Learn Syst. 2025 Jul 8;PP. doi: 10.1109/TNNLS.2025.3580495.
6
Knowledge-distillation based personalized federated learning with distribution constraints.
Neural Netw. 2025 Aug 6;193:107951. doi: 10.1016/j.neunet.2025.107951.
7
Sign-Entropy Regularization for Personalized Federated Learning.
Entropy (Basel). 2025 Jun 4;27(6):601. doi: 10.3390/e27060601.
8
The Diversity Bonus: Learning From Dissimilar Clients in Personalized Federated Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Jul 23;PP. doi: 10.1109/TNNLS.2025.3585927.

