
Efficient Gradient Updating Strategies with Adaptive Power Allocation for Federated Learning over Wireless Backhaul.

Author Information

Yang Yunji, Hong Yonggi, Park Jaehyun

Affiliation

Division of Smart Robot Convergence and Application Engineering, Department of Electronic Engineering, Pukyong National University, Busan 48513, Korea.

Publication Information

Sensors (Basel). 2021 Oct 13;21(20):6791. doi: 10.3390/s21206791.

Abstract

In this paper, efficient gradient updating strategies are developed for federated learning in which distributed clients are connected to the server via a wireless backhaul link. Specifically, a common convolutional neural network (CNN) module is shared by all the distributed clients and is trained through federated learning over the wireless backhaul connected to the main server. During the training phase, however, local gradients must be transferred from multiple clients to the server over the wireless backhaul link and can be distorted by wireless channel fading. To overcome this, an efficient gradient updating method is proposed in which the gradients are combined such that the effective SNR at the server is maximized. In addition, when the backhaul links of all clients simultaneously have small channel gains, the server may receive severely distorted gradient vectors. Accordingly, we also propose a binary gradient updating strategy based on thresholding, in which any round where all channels have small gains is excluded from federated learning. Because each client has limited transmission power, it is more effective to allocate power to the channel slots carrying important information than to spread it equally across all channel resources (equivalently, slots). Accordingly, we also propose an adaptive power allocation method in which each client allocates its transmit power in proportion to the magnitude of the gradient information. This is because, when training a deep learning model, gradient elements with large values imply large weight updates that decrease the loss function.
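The following is a minimal NumPy sketch of one training round that puts the three ideas from the abstract together: per-slot power allocation proportional to gradient magnitude, server-side combining weighted by per-link SNR, and threshold-based round exclusion. All names and constants here (P_TOTAL, NOISE_VAR, GAIN_THRESHOLD, the real block-fading channel model, and the equalize-then-weighted-average combiner) are illustrative assumptions, not the exact system model of the paper.

```python
import numpy as np

# Illustrative sketch only: constants and channel model are assumptions.
rng = np.random.default_rng(0)

NUM_CLIENTS = 4
GRAD_DIM = 8
P_TOTAL = 1.0          # per-client transmit power budget (assumed)
NOISE_VAR = 0.1        # AWGN variance at the server (assumed)
GAIN_THRESHOLD = 0.3   # skip a round if every |h_k| falls below this (assumed)

def adaptive_power_allocation(grad, p_total=P_TOTAL):
    """Split the power budget across channel slots in proportion to the
    gradient magnitudes: large-gradient elements carry more power."""
    mag = np.abs(grad)
    total = mag.sum()
    if total == 0.0:
        return np.full_like(mag, p_total / mag.size)
    return p_total * mag / total

def transmit(grad, power, h):
    """Fading backhaul link: each slot is scaled by sqrt(power) and a
    block-fading channel gain h, then corrupted by Gaussian noise."""
    noise = rng.normal(0.0, np.sqrt(NOISE_VAR), size=grad.shape)
    return h * np.sqrt(power) * grad + noise

def combine_max_effective_snr(received, gains, powers):
    """Server-side combining: equalize each client's signal, then average
    with weights proportional to the per-slot receive SNR (h^2 * p), so
    better-received gradients dominate the combined estimate."""
    estimates, weights = [], []
    for y, h, p in zip(received, gains, powers):
        estimates.append(y / (h * np.sqrt(p)))   # undo channel and power scaling
        weights.append(h * h * p)                # per-slot SNR weight
    estimates = np.stack(estimates)              # shape: (clients, dim)
    weights = np.stack(weights)
    return (weights * estimates).sum(axis=0) / weights.sum(axis=0)

# --- one federated round ---
local_grads = rng.normal(size=(NUM_CLIENTS, GRAD_DIM))
gains = np.abs(rng.normal(size=NUM_CLIENTS))     # block-fading channel gains

if np.all(gains < GAIN_THRESHOLD):
    # Binary updating: all backhaul links are in a deep fade, so the
    # distorted gradients would hurt training -- drop this round entirely.
    print("round skipped: all channel gains below threshold")
else:
    powers = [adaptive_power_allocation(g) for g in local_grads]
    received = [transmit(g, p, h) for g, p, h in zip(local_grads, powers, gains)]
    combined = combine_max_effective_snr(received, gains, powers)
    print("combined gradient estimate:", np.round(combined, 3))
```

Note that the SNR-proportional weights reduce to maximal-ratio-style combining when all clients hold the same gradient; with heterogeneous local gradients, this weighting trades noise suppression against bias toward well-connected clients.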


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/91b8/8537050/54b51a1d182b/sensors-21-06791-g001.jpg
