Department of Electronic Engineering, Sogang University, Seoul 04107, Republic of Korea.
Sensors (Basel). 2023 Jan 23;23(3):1295. doi: 10.3390/s23031295.
Recently, with the development of autonomous driving technology, vehicle-to-everything (V2X) communication, which provides wireless connections between vehicles, pedestrians, and roadside base stations, has gained significant attention. Vehicle-to-vehicle (V2V) communication should provide low-latency, highly reliable services through direct communication between vehicles, thereby improving safety. In particular, as the number of vehicles increases, efficient radio resource management becomes more important. In this paper, we propose a deep reinforcement learning (DRL)-based decentralized resource allocation scheme for a V2X communication network in which radio resources are shared between the V2V and vehicle-to-infrastructure (V2I) networks. Here, a deep Q-network (DQN) is used to select the resource blocks and transmit power of the vehicles in the V2V network so as to maximize the sum rate of the V2I and V2V links while reducing the power consumption and latency of the V2V links. The DQN uses the channel state information, the signal-to-interference-plus-noise ratio (SINR) of the V2I and V2V links, and the latency constraints of the vehicles to find the optimal resource allocation. The proposed DQN-based resource allocation scheme ensures energy-efficient transmissions that satisfy the latency constraints of the V2V links while reducing the interference from the V2V network to the V2I network. We evaluate the performance of the proposed scheme in terms of the sum rate of the V2X network, the average power consumption of the V2V links, and the average outage probability of the V2V links, using the nine-block Manhattan case study defined in 3GPP TR 36.885. The simulation results show that the proposed scheme greatly reduces the transmit power of the V2V links compared to a conventional reinforcement learning-based resource allocation scheme, without sacrificing the sum rate of the V2X network or the outage probability of the V2V links.
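As a rough illustration of the decision loop the abstract describes, the following minimal sketch implements a DQN agent whose state combines channel state information, V2I/V2V SINR, and a remaining latency budget, and whose discrete actions jointly select a resource block and a transmit-power level. The network sizes, power levels, state layout, and reward weights below are illustrative assumptions, not the paper's exact design or hyperparameters.

# Minimal DQN sketch for joint resource-block / transmit-power selection.
# All dimensions, power levels, and reward weights are illustrative
# assumptions; the paper's exact state encoding and parameters differ.
import random
import torch
import torch.nn as nn

N_RB = 4                         # assumed number of shareable resource blocks
POWER_LEVELS_DBM = [5, 10, 23]   # assumed discrete V2V transmit-power levels
N_ACTIONS = N_RB * len(POWER_LEVELS_DBM)  # joint (RB, power) action space
STATE_DIM = 2 * N_RB + 2         # e.g., per-RB CSI + per-RB interference,
                                 # plus current SINR and latency budget

class QNetwork(nn.Module):
    """Maps one V2V link's local observation to a Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice; the action index encodes (RB, power level)."""
    if random.random() < epsilon:
        a = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            a = int(q_net(state).argmax())
    return a // len(POWER_LEVELS_DBM), POWER_LEVELS_DBM[a % len(POWER_LEVELS_DBM)]

def reward(v2i_rate, v2v_rate, tx_power_dbm, latency_ok,
           w_rate=1.0, w_power=0.05, penalty=10.0):
    """Reward the V2I + V2V sum rate, penalize transmit power, and punish
    latency-constraint violations (weights assumed for illustration)."""
    r = w_rate * (v2i_rate + v2v_rate) - w_power * tx_power_dbm
    return r if latency_ok else r - penalty

# Example step for one V2V agent:
#   state = torch.randn(STATE_DIM)           # placeholder observation
#   rb, p_dbm = select_action(QNetwork(), state, epsilon=0.1)

In a decentralized deployment of this kind, each V2V transmitter would run its own copy of such a policy on locally observable quantities, which matches the abstract's emphasis on decentralized resource allocation; the power-consumption term in the reward is what drives the energy savings reported in the evaluation.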