Zhuang Wei, Xing Fanan, Lu Yuhang
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China.
Sensors (Basel). 2024 Mar 24;24(7):2070. doi: 10.3390/s24072070.
With the ongoing advancement of the electric power Internet of Things (IoT), traditional power inspection methods face challenges such as low efficiency and high risk. Unmanned aerial vehicles (UAVs) have emerged as a more efficient solution for inspecting power facilities due to their high maneuverability, excellent line-of-sight communication capabilities, and strong adaptability. However, UAVs typically grapple with limited computational power and energy resources, which constrain their effectiveness in handling computationally intensive and latency-sensitive inspection tasks. To address this issue, we propose a UAV task offloading strategy based on deep reinforcement learning (DRL), designed for power inspection scenarios consisting of mobile edge computing (MEC) servers and multiple UAVs. Firstly, we propose an innovative UAV-edge server collaborative computing architecture to fully exploit the mobility of UAVs and the high-performance computing capabilities of MEC servers. Secondly, we establish a computational model of energy consumption and task processing latency in the UAV power inspection system, enhancing our understanding of the trade-offs involved in UAV offloading strategies. Finally, we formalize the task offloading problem as a multi-objective optimization problem and model it as a Markov Decision Process (MDP). We then propose an offloading task algorithm based on the Deep Deterministic Policy Gradient (OTDDPG) to obtain the optimal task offloading strategy for UAVs. Simulation results demonstrate that this approach outperforms baseline methods, with significant improvements in task processing latency and energy consumption.
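The latency/energy trade-off behind the offloading decision can be illustrated with a toy model of the MDP described above: the state is the size of the inspection task, the continuous action is the fraction of the task offloaded to the MEC server, and the reward is a weighted negative sum of latency and energy. This is a minimal sketch; all class names, constants, and the reward weighting are illustrative assumptions, not values or structures taken from the paper.

```python
import random

class OffloadEnv:
    """Toy MDP for UAV task offloading.

    State: size of the current inspection task in bits.
    Action: fraction of the task offloaded to the MEC server, in [0, 1].
    Reward: weighted negative sum of latency and energy.
    All constants below are illustrative assumptions, not values from the paper.
    """

    F_UAV = 1e9           # UAV CPU speed (cycles/s), assumed
    F_MEC = 10e9          # MEC server CPU speed (cycles/s), assumed
    RATE = 20e6           # uplink transmission rate (bits/s), assumed
    CYCLES_PER_BIT = 1000 # CPU cycles needed per bit of task data, assumed
    KAPPA = 1e-27         # effective switched capacitance of the UAV CPU, assumed
    TX_POWER = 0.5        # UAV transmit power (W), assumed

    def reset(self):
        # Draw a new inspection task of random size.
        self.task_bits = random.uniform(1e6, 5e6)
        return self.task_bits

    def step(self, offload_ratio):
        local_bits = (1 - offload_ratio) * self.task_bits
        remote_bits = offload_ratio * self.task_bits
        # Local computing: latency plus dynamic CPU energy (kappa * cycles * f^2).
        t_local = local_bits * self.CYCLES_PER_BIT / self.F_UAV
        e_local = self.KAPPA * local_bits * self.CYCLES_PER_BIT * self.F_UAV ** 2
        # Offloading: uplink transmission followed by edge computing,
        # running in parallel with the local share of the task.
        t_tx = remote_bits / self.RATE
        t_edge = remote_bits * self.CYCLES_PER_BIT / self.F_MEC
        e_tx = self.TX_POWER * t_tx
        latency = max(t_local, t_tx + t_edge)
        energy = e_local + e_tx
        # Multi-objective reward: equal weights on latency and energy (assumed).
        reward = -(0.5 * latency + 0.5 * energy)
        return reward, latency, energy

env = OffloadEnv()
env.reset()
# Compare fully local processing against fully offloaded processing of the same task.
r_local, t_local, e_local = env.step(0.0)
r_off, t_off, e_off = env.step(1.0)
```

Under these assumed parameters the MEC server's higher clock rate dominates the transmission overhead, so full offloading yields lower latency and a higher reward; a DDPG-style agent would learn such continuous offloading ratios directly from the reward signal rather than from a closed-form model.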