Lee Jihun, Kim Hun, So Jaewoo
Department of Electronic Engineering, Sogang University, Seoul 04107, Republic of Korea.
Sensors (Basel). 2024 Jan 27;24(3):837. doi: 10.3390/s24030837.
Directional antennas combined with beamforming are an attractive solution for accommodating high-data-rate applications in 5G vehicular communications. However, the directional nature of beamforming requires beam alignment between the transmitter and the receiver, which incurs significant signaling overhead. Hence, we need to find the optimal parameters for directional beamforming, i.e., the antenna beamwidth and the beam alignment interval, that maximize the throughput while taking the beam alignment overhead into consideration. In this paper, we propose a reinforcement learning (RL)-based beamforming scheme for a vehicle-to-infrastructure (V2I) system, in which we jointly determine the antenna beamwidth and the beam alignment interval, taking into account both past and future rewards. Simulation results show that the proposed RL-based joint beamforming scheme outperforms conventional beamforming schemes in terms of both average throughput and average link stability ratio.
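To illustrate the kind of trade-off the abstract describes, the following is a minimal toy sketch, not the authors' method: a tabular Q-learning agent jointly picks a beamwidth and a beam alignment interval to maximize an effective-throughput reward that penalizes alignment overhead and misalignment. All numbers, the reward model, and the speed-bin state space are hypothetical illustrations, not values from the paper.

```python
import itertools
import math
import random

# Hypothetical action space: all values are illustrative, not from the paper.
BEAMWIDTHS = [5.0, 10.0, 20.0, 40.0]   # degrees
INTERVALS = [10, 20, 50, 100]          # ms between beam realignments
ALIGN_COST_MS = 5.0                    # assumed fixed overhead per realignment
ACTIONS = list(itertools.product(BEAMWIDTHS, INTERVALS))
SPEEDS = [10, 30, 60]                  # toy state: vehicle speed bins (km/h)

def effective_throughput(speed, beamwidth, interval):
    """Toy reward: narrow beams give higher rate but lose alignment
    sooner at high speed; frequent realignment costs overhead."""
    rate = math.log2(1.0 + 100.0 / beamwidth)            # narrower -> higher gain
    duty = 1.0 - ALIGN_COST_MS / (interval + ALIGN_COST_MS)
    # probability the beam stays aligned over the whole interval
    p_aligned = math.exp(-speed * interval / (1000.0 * beamwidth))
    return duty * p_aligned * rate

# Tabular Q-learning: state = speed bin, action = (beamwidth, interval).
Q = {(s, a): 0.0 for s in SPEEDS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.3
rng = random.Random(0)

for _ in range(20000):
    s = rng.choice(SPEEDS)
    if rng.random() < eps:                               # explore
        a = rng.choice(ACTIONS)
    else:                                                # exploit
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    r = effective_throughput(s, *a)
    s_next = rng.choice(SPEEDS)  # speed evolves randomly in this toy model
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

for s in SPEEDS:
    bw, iv = max(ACTIONS, key=lambda act: Q[(s, act)])
    print(f"speed {s} km/h -> beamwidth {bw} deg, alignment interval {iv} ms")
```

Under this toy reward, the learned policy tends to shorten the alignment interval as the speed bin increases, mirroring the intuition that faster vehicles need more frequent realignment. The paper's actual scheme differs in its state, action, and reward design.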