Agency for Defense Development, Daejeon 34186, Korea.
Department of Computer Science and Engineering, Chungnam National University, Daejeon 34134, Korea.
Sensors (Basel). 2020 Oct 5;20(19):5685. doi: 10.3390/s20195685.
Although various unmanned aerial vehicle (UAV)-assisted routing protocols have been proposed for vehicular ad hoc networks, few studies have investigated load balancing algorithms that simultaneously accommodate future traffic growth and cope with complex, dynamic network environments. In particular, owing to the extended coverage and clear line-of-sight relay links of a UAV relay node (URN), the likelihood of a bottleneck link is high. To prevent problems caused by traffic congestion, we propose Q-learning based load balancing routing (Q-LBR), which combines three key techniques: a low-overhead technique in which the URN estimates the network load from the queue status reported by each ground vehicular node, a Q-learning-based load balancing scheme, and a reward control function for rapid convergence of the Q-learning. Through diverse simulations, we demonstrate that Q-LBR improves the packet delivery ratio, network utilization, and latency by more than 8%, 28%, and 30%, respectively, compared to the existing protocol.
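The abstract outlines the general shape of the scheme: the URN collects queue-status reports, a Q-learning agent selects between candidate relay paths, and a reward control function shaped by queue congestion speeds up convergence. The sketch below is a minimal, illustrative Python rendering of that idea under stated assumptions; the concrete state space, action set, reward function, and parameter values (named here as congestion_state, reward, ALPHA, GAMMA, EPSILON, and the penalty weight k) are hypothetical placeholders and are not taken from the paper itself.

```python
import random

# Hypothetical learning parameters; the paper's actual values are not given in the abstract.
ALPHA = 0.1      # learning rate
GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration probability

# Illustrative action set: forward via the UAV relay node (URN) or a ground multi-hop path.
ACTIONS = ["uav_relay", "ground_relay"]

# Illustrative states: coarse congestion levels inferred from reported queue occupancy.
STATES = ["low_load", "mid_load", "high_load"]

# Q-table initialised to zero for every (state, action) pair.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}


def congestion_state(queue_utilization: float) -> str:
    """Map a queue-utilization ratio (0..1) reported by nodes to a coarse state (assumed discretization)."""
    if queue_utilization < 0.3:
        return "low_load"
    if queue_utilization < 0.7:
        return "mid_load"
    return "high_load"


def reward(queue_utilization: float, delivered: bool, k: float = 5.0) -> float:
    """Illustrative reward control: reward delivery, penalise routing through congested
    queues; a larger k steepens the penalty so Q-values react faster to rising load."""
    base = 1.0 if delivered else -1.0
    return base - k * queue_utilization


def choose_action(state: str) -> str:
    """Epsilon-greedy path selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def update(state: str, action: str, r: float, next_state: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])


# Example episode: the URN observes 80% queue utilization, picks a path,
# then observes delivery success and the resulting load level.
s = congestion_state(0.8)
a = choose_action(s)
r = reward(queue_utilization=0.8, delivered=True)
s_next = congestion_state(0.6)
update(s, a, r, s_next)
```

The design intent mirrored here is that congestion information gathered cheaply from queue reports drives both the state and the reward, so the learned policy steers traffic away from the URN when its relay link becomes a bottleneck.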