Signal Theory and Communications Department, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain.
Sensors (Basel). 2022 Aug 18;22(16):6179. doi: 10.3390/s22166179.
Multi-connectivity has become a useful tool for managing traffic in heterogeneous cellular network deployments, since it allows a device to be simultaneously connected to multiple cells. Exploiting this technique properly requires adequately configuring the traffic sent through each cell depending on the experienced conditions. This motivates the present work, which tackles the problem of how to optimally split a device's traffic among the cells when the multi-connectivity feature is used. To this end, the paper proposes a deep reinforcement learning solution based on a Deep Q-Network (DQN) that determines the amount of traffic of a device to be delivered through each cell, making the decision as a function of the current traffic and radio conditions. The obtained results show near-optimal performance of the DQN-based solution, with an average difference of only 3.9% in reward with respect to the optimum strategy. Moreover, the solution clearly outperforms a reference scheme based on the Signal to Interference plus Noise Ratio (SINR), with differences of up to 50% in reward and up to 166% in throughput in certain situations. Overall, the presented results show the promising performance of the DQN-based approach, establishing a basis for further research on multi-connectivity and for the application of this type of technique to other problems of the radio access network.
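The abstract does not detail the paper's DQN architecture, state space, or reward. As an illustration only, the toy sketch below shows the general idea: a small Q-network (trained online with manual backprop) learns what fraction of a device's traffic to route through each of two cells, given the current demand and per-cell SINR. The environment, the Shannon-style capacity model, the state/action/reward definitions, and all names are assumptions made for this sketch; a full DQN would additionally use experience replay and a target network, omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate actions: fraction of the device's traffic routed through cell 1
# (the remainder goes through cell 2). The discretization is an assumption.
SPLITS = np.linspace(0.0, 1.0, 5)

def simulate_step(state, action_idx):
    """Toy environment (assumed, not the paper's): the reward is the total
    traffic served when a `split` fraction of the demand goes to cell 1 and
    the rest to cell 2, each cell capped by a capacity that grows with SINR."""
    demand, sinr1_db, sinr2_db = state
    split = SPLITS[action_idx]
    cap1 = np.log2(1.0 + 10.0 ** (sinr1_db / 10.0))  # Shannon-style proxy
    cap2 = np.log2(1.0 + 10.0 ** (sinr2_db / 10.0))
    reward = min(split * demand, cap1) + min((1.0 - split) * demand, cap2)
    # Draw the next traffic demand and per-cell SINRs at random.
    next_state = np.array([rng.uniform(1, 6), rng.uniform(0, 20), rng.uniform(0, 20)])
    return reward, next_state

class TinyQNet:
    """One-hidden-layer Q-network trained online on TD targets."""
    def __init__(self, n_in=3, n_hidden=16, n_out=len(SPLITS), lr=1e-3):
        self.W1 = rng.normal(0.0, 0.3, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.3, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _feat(s):
        # Normalize demand and SINRs to roughly [0, 1] for stable training.
        return s / np.array([6.0, 20.0, 20.0])

    def q(self, s):
        x = self._feat(s)
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        return x, h, h @ self.W2 + self.b2           # one Q-value per split

    def update(self, s, a, target):
        x, h, qv = self.q(s)
        err = qv[a] - target                         # TD error on taken action
        one_hot = np.eye(len(SPLITS))[a]
        grad_h = self.W2[:, a] * err * (h > 0)       # backprop through ReLU
        self.W2 -= self.lr * np.outer(h, one_hot) * err
        self.b2 -= self.lr * one_hot * err
        self.W1 -= self.lr * np.outer(x, grad_h)
        self.b1 -= self.lr * grad_h

gamma, eps = 0.9, 0.2                                # discount, exploration rate
agent = TinyQNet()
state = np.array([4.0, 10.0, 2.0])                   # demand, SINR1 (dB), SINR2 (dB)
for _ in range(2000):
    if rng.random() < eps:                           # epsilon-greedy exploration
        a = int(rng.integers(len(SPLITS)))
    else:
        a = int(np.argmax(agent.q(state)[2]))
    reward, nxt = simulate_step(state, a)
    target = reward + gamma * np.max(agent.q(nxt)[2])
    agent.update(state, a, target)
    state = nxt
```

After training, the greedy action `np.argmax(agent.q(state)[2])` gives the learned traffic split for a given demand and SINR pair; the paper's reported gains over the SINR-based baseline come from exactly this kind of condition-dependent split decision.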