Yang An, Liu Ying
College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.
Entropy (Basel). 2025 Jul 16;27(7):759. doi: 10.3390/e27070759.
Federated learning (FL), which enables collaborative learning across distributed nodes, faces a significant heterogeneity challenge, comprising resource heterogeneity induced by different hardware platforms and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to the server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing training stability. Then, we build random-forest-based personalized performance predictors to enable efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative training of the discovered personalized models. Comprehensive experiments on the CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of HAPFNAS compared with state-of-the-art federated NAS methods.
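The abstract does not specify how heteroFedAvg aggregates architecturally different client models. A minimal sketch of one plausible model-heterogeneous averaging scheme is shown below, assuming each personalized model exposes a dict of named parameter vectors and that the server averages each parameter only across the clients whose architectures contain it, weighted by local dataset size. The function name, layer names, and weighting scheme are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of model-heterogeneous federated averaging.
# Clients hold personalized (heterogeneous) models, so the server can
# only aggregate a parameter across the subset of clients that have it.
# Layer names and sample-count weighting are assumptions for illustration.

def hetero_fedavg(client_params, client_sizes):
    """client_params: list of dicts mapping layer name -> list of floats.
    client_sizes: list of local dataset sizes used as aggregation weights."""
    aggregated = {}
    # Union of all parameter names across the heterogeneous client models.
    all_names = {name for params in client_params for name in params}
    for name in all_names:
        # Collect this parameter only from clients whose model contains it.
        updates = [(p[name], n)
                   for p, n in zip(client_params, client_sizes)
                   if name in p]
        total = sum(n for _, n in updates)
        dim = len(updates[0][0])
        # Weighted coordinate-wise average, FedAvg-style.
        aggregated[name] = [
            sum(vec[i] * n for vec, n in updates) / total
            for i in range(dim)
        ]
    return aggregated
```

For example, a layer shared by all clients is averaged across everyone, while a client-specific head is simply carried over from its sole owner.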