
PerHeFed: A general framework of personalized federated learning for heterogeneous convolutional neural networks.

Author Information

Ma Le, Liao YuYing, Zhou Bin, Xi Wen

Affiliations

Xi'an Institute of High Technology, Xi'an, China.

National University of Defense Technology, Changsha, China.

Publication Information

World Wide Web. 2022 Dec 12:1-23. doi: 10.1007/s11280-022-01119-x.

Abstract

In conventional federated learning, each device is restricted to training a network model of the same structure. This greatly hinders the application of federated learning in settings where the data and devices are highly heterogeneous due to differing hardware and communication networks. At the same time, existing studies have shown that transmitting all of the model parameters not only incurs heavy communication costs, but also increases the risk of privacy leakage. We propose a general framework for personalized federated learning (PerHeFed), which enables devices to design their local model structures autonomously and share sub-models without structural restrictions. In PerHeFed, a simple-but-effective mapping relation and a novel personalized sub-model aggregation method are proposed so that heterogeneous sub-models can be aggregated. By dividing aggregation into two primitive types (i.e., inter-layer and intra-layer), PerHeFed is applicable to any combination of heterogeneous convolutional neural networks, which we believe can satisfy the personalized requirements of heterogeneous models. Experiments show that, compared to a state-of-the-art method (FLOP), on non-IID data our method compresses ≈ 50% of the shared sub-model parameters with only a 4.38% drop in accuracy on the SVHN dataset, while on CIFAR-10, PerHeFed even achieves a 0.3% improvement in accuracy. To the best of our knowledge, our work is the first general personalized federated learning framework for heterogeneous convolutional networks, even across different networks, addressing the model-structure uniformity constraint of conventional federated learning.
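The intra-layer aggregation the abstract describes can be illustrated with a minimal sketch. The code below is an assumption for illustration only (the paper's actual mapping relation is not given in the abstract): it maps parameters of differently shaped convolution kernels by index, averages only the overlapping sub-tensor across clients, and writes the averaged region back into each client's own kernel, so every client keeps its personalized shape.

```python
import numpy as np

def aggregate_heterogeneous(weights):
    """Average a list of conv kernels of different shapes over their
    shared (index-aligned) region; each client keeps its own shape.

    Illustrative sketch of intra-layer aggregation: the "mapping
    relation" assumed here is simple positional alignment.
    """
    # Shared region = element-wise minimum of every dimension.
    common = tuple(min(w.shape[d] for w in weights)
                   for d in range(weights[0].ndim))
    region = tuple(slice(0, c) for c in common)
    # Average the overlapping sub-tensor across all clients.
    mean = np.mean([w[region] for w in weights], axis=0)
    # Write the aggregated region back into each client's kernel;
    # parameters outside the shared region stay personalized.
    out = []
    for w in weights:
        new_w = w.copy()
        new_w[region] = mean
        out.append(new_w)
    return out
```

For example, two clients with kernels of shape (4, 3, 3, 3) and (2, 3, 5, 5) would aggregate only the shared (2, 3, 3, 3) corner, leaving the remaining filters and spatial positions untouched.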


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3008/9743105/c23428898e1f/11280_2022_1119_Fig1_HTML.jpg
