Yue Kai, Jin Richeng, Wong Chau-Wai, Dai Huaiyu
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8215-8228. doi: 10.1109/TNNLS.2022.3225715. Epub 2024 Jun 3.
Federated learning allows collaborative clients to solve a machine-learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency is still an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, clients transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance the resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. Our results demonstrate that the proposed method can reduce quantization error and converge faster than methods that directly quantize the model updates.
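The weighted-voting aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes clients send binary (+1/−1) weight vectors and the server takes a per-parameter weighted plurality vote, where per-client vote weights (e.g., reputation scores used to downweight suspected Byzantine clients) are a hypothetical parameter added for illustration.

```python
import numpy as np

def plurality_vote_aggregate(client_weights, vote_weights=None):
    """Aggregate binary (+1/-1) client weight vectors by weighted plurality vote.

    client_weights: array of shape (num_clients, num_params), entries in {-1, +1}.
    vote_weights: optional per-client voting weights (hypothetical reputation scores).
    Returns a binary global model of shape (num_params,).
    """
    votes = np.asarray(client_weights, dtype=float)
    if vote_weights is not None:
        votes = votes * np.asarray(vote_weights, dtype=float)[:, None]
    tally = votes.sum(axis=0)
    # Break exact ties in favor of +1 so the output stays in {-1, +1}.
    return np.where(tally >= 0, 1, -1)

# Example: three clients vote on four binary parameters.
clients = np.array([
    [ 1, -1,  1,  1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],
])
print(plurality_vote_aggregate(clients))  # -> [ 1 -1  1  1]
```

Giving one client a larger vote weight can flip the outcome on contested parameters, which is why the paper weights votes to limit the influence of unreliable (Byzantine) clients.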