Xu Zhiang, Guo Yijia, Chakraborty Chinmay, Hua Qiaozhi, Chen Shengbo, Yu Keping
IEEE J Biomed Health Inform. 2023 Feb;27(2):652-663. doi: 10.1109/JBHI.2022.3187471. Epub 2023 Feb 3.
Federated Learning (FL) over Internet of Medical Things (IoMT) devices has become a current research hotspot. As a new architecture, FL can effectively protect the data privacy of IoMT devices, but the security of neural network model transmission cannot be guaranteed. On the other hand, currently popular neural network models are usually large, and deploying them on IoMT devices has become a challenge. One promising approach to these problems is to reduce the network scale by quantizing the parameters of the neural networks, which can greatly improve the security of data transmission and reduce the transmission cost. In the previous literature, the fixed-point quantizer with stochastic rounding has been shown to outperform other quantization methods. However, how to design such a quantizer to achieve the minimum square quantization error is still unknown. In addition, how to apply this quantizer in the FL framework also needs investigation. To address these questions, in this paper, we propose FedMSQE - Federated Learning with Minimum Square Quantization Error, which achieves the smallest quantization error for each individual client in the FL setting. Through numerical experiments in both single-node and FL scenarios, we show that our proposed algorithm achieves higher accuracy and lower quantization error than other quantization methods.
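To make the quantization idea in the abstract concrete, the sketch below illustrates a generic fixed-point quantizer with stochastic rounding in NumPy. It is not the paper's FedMSQE algorithm; the function name, bit width, and clipping range are illustrative assumptions used only to show how stochastic rounding keeps the quantizer unbiased while mapping weights onto a fixed-point grid.

```python
import numpy as np

def fixed_point_stochastic_round(w, num_bits=8, clip_range=1.0):
    """Quantize a weight tensor to a signed fixed-point grid with stochastic rounding.

    Illustrative sketch only: bit width and clipping range are hypothetical
    parameters, not the design proposed in the paper.
    """
    # Step size of the fixed-point grid covering [-clip_range, clip_range]
    step = 2 * clip_range / (2 ** num_bits - 1)

    # Clip to the representable range, then scale onto the integer grid
    scaled = np.clip(w, -clip_range, clip_range) / step

    # Stochastic rounding: round down, then round up with probability equal to
    # the fractional remainder, so the quantized value is unbiased in expectation
    floor = np.floor(scaled)
    prob_up = scaled - floor
    rounded = floor + (np.random.random_sample(w.shape) < prob_up)

    return rounded * step

# Example: quantize random weights and measure the square quantization error
weights = np.random.randn(1000) * 0.1
q = fixed_point_stochastic_round(weights, num_bits=4)
print("mean square quantization error:", np.mean((weights - q) ** 2))
```

In an FL setting such as the one described above, a client would apply a quantizer of this kind to its model update before transmission, trading a small, controllable quantization error for reduced communication cost.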