
FedRAD: Heterogeneous Federated Learning via Relational Adaptive Distillation.

Authors

Tang Jianwu, Ding Xuefeng, Hu Dasha, Guo Bing, Shen Yuncheng, Ma Pan, Jiang Yuming

Affiliations

College of Computer Science, Sichuan University, Chengdu 610065, China.

Big Data Analysis and Fusion Application Technology Engineering Laboratory of Sichuan Province, Chengdu 610065, China.

Publication

Sensors (Basel). 2023 Jul 19;23(14):6518. doi: 10.3390/s23146518.

Abstract

As the Internet of Things (IoT) continues to develop, Federated Learning (FL) is gaining popularity as a distributed machine learning framework that does not compromise the data privacy of any participant. However, the data held by enterprises and factories in the IoT often have different distribution properties (Non-IID), leading to poor federated learning results. This problem causes clients to forget global knowledge during their local training phase, which tends to slow convergence and degrade accuracy. In this work, we propose a method named FedRAD, based on relational knowledge distillation, which enhances the mining of high-quality global knowledge by local models from a higher-dimensional perspective during local training, so as to better retain global knowledge and avoid forgetting. At the same time, we devise an entropy-wise adaptive weights module (EWAW) to better regulate the proportion of loss between single-sample knowledge distillation and relational knowledge distillation, so that students can weigh the losses based on prediction entropy and learn global knowledge more effectively. A series of experiments on CIFAR10 and CIFAR100 show that FedRAD achieves better convergence speed and classification accuracy than other advanced FL methods.
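The combined distillation objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the relational term is assumed to be a distance-wise RKD loss over batch embeddings, and the entropy-wise weight is assumed to be the student's normalized prediction entropy; the function names (`rkd_loss`, `entropy_weight`, `fedrad_distill_loss`) and the temperature value are hypothetical.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, t=2.0):
    # Single-sample KD: KL(teacher || student), averaged over the batch.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

def pairwise_dist(x):
    # Normalized pairwise Euclidean distances: the "relational" structure of a batch.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    mu = d[d > 0].mean() if (d > 0).any() else 1.0
    return d / mu

def rkd_loss(student_emb, teacher_emb):
    # Distance-wise relational KD: match the pairwise-distance structures
    # of student and teacher embeddings (squared error used here for simplicity).
    ds, dt = pairwise_dist(student_emb), pairwise_dist(teacher_emb)
    return float(np.mean((ds - dt) ** 2))

def entropy_weight(student_logits):
    # Assumed EWAW-style weight: mean prediction entropy, normalized to [0, 1].
    q = softmax(student_logits)
    h = -np.sum(q * np.log(q + 1e-12), axis=1)
    return float(np.mean(h / np.log(q.shape[1])))

def fedrad_distill_loss(student_logits, teacher_logits, student_emb, teacher_emb):
    # Entropy-adaptive mix of single-sample KD and relational KD:
    # uncertain predictions (high entropy) lean on per-sample distillation,
    # confident ones lean on the relational term.
    w = entropy_weight(student_logits)
    return w * kd_loss(student_logits, teacher_logits) + (1 - w) * rkd_loss(student_emb, teacher_emb)
```

In a federated round, the frozen global model would play the teacher (providing `teacher_logits` and `teacher_emb`) while each client's local model is the student; this distillation loss would be added to the usual local cross-entropy loss.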


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1c1/10385861/ec0de4e38bd1/sensors-23-06518-g001.jpg
