Jiang Yuang, Wang Shiqiang, Valls Victor, Ko Bong Jun, Lee Wei-Han, Leung Kin K, Tassiulas Leandros
IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10374-10386. doi: 10.1109/TNNLS.2022.3166101. Epub 2023 Nov 30.
Federated learning (FL) allows model training from local data collected by edge/mobile devices while preserving data privacy, which has wide applicability to image and vision applications. A challenge is that client devices in FL usually have much more limited computation and communication resources compared to servers in a data center. To overcome this challenge, we propose PruneFL, a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to that of the original model. PruneFL includes initial pruning at a selected client and further pruning as part of the FL process. The model size is adapted during this process by maximizing the approximate empirical risk reduction divided by the time of one FL round. Our experiments with various datasets on edge devices (e.g., Raspberry Pi) show that: 1) we significantly reduce the training time compared to conventional FL and various other pruning-based methods and 2) the pruned model with automatically determined size converges to an accuracy very close to that of the original model, and it is also a lottery ticket of the original model.