Department of Information Engineering, Suzhou University, Suzhou, Anhui 234000, China.
Comput Intell Neurosci. 2022 May 14;2022:6174708. doi: 10.1155/2022/6174708. eCollection 2022.
To address the problem that the computing power and resources of Mobile Edge Computing (MEC) servers struggle to process long-period, intensive task data, this study proposes a reinforcement-learning-based resource allocation strategy for 5G converged networks in an edge cloud computing environment. To compensate for insufficient local computing power, the proposed strategy offloads some tasks to the network edge. Firstly, we build a mobile edge system with multiple MEC servers and multiple users, and design optimization objectives that minimize the average task response time and the total energy consumption of the system. Then, the task offloading and resource allocation process is modeled as a Markov decision process. Furthermore, a deep Q-network is used to find the optimal resource allocation scheme. Finally, the proposed strategy is evaluated experimentally on the TensorFlow framework. Experimental results show that when the number of users is 110, the final energy consumption is about 2500 J, and the strategy effectively reduces task delay and improves resource utilization.
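The abstract describes modeling task offloading as a Markov decision process and solving it with a deep Q-network. The toy sketch below illustrates only the underlying Q-learning update on a drastically simplified offloading MDP: a single MEC server, a handful of task-size states, two actions (compute locally or offload), and hypothetical delay-plus-energy costs. It uses a tabular Q-function in place of the paper's neural network, and every numeric value is an assumption for illustration, not the paper's model.

```python
import random

# Toy illustration of the Q-learning update underlying a DQN, applied to a
# simplified task-offloading MDP. All costs below are hypothetical; the
# paper's actual system has multiple MEC servers, queueing dynamics, and a
# neural-network Q-function trained in TensorFlow.

random.seed(0)

STATES = (0, 1, 2)   # task size class: 0 = small, 1 = medium, 2 = large
ACTIONS = (0, 1)     # 0 = compute locally, 1 = offload to the MEC server

def cost(state, action):
    """Weighted sum of delay and energy (hypothetical values)."""
    local_cost = [1.0, 3.0, 9.0]    # local cost grows quickly with task size
    offload_cost = [2.0, 2.5, 4.0]  # offloading adds transmission overhead
    return local_cost[state] if action == 0 else offload_cost[state]

# Tabular Q-function; a DQN replaces this table with a neural network.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(5000):
    s = random.choice(STATES)
    # epsilon-greedy action selection (minimising cost, so take the min)
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = min(ACTIONS, key=lambda a_: Q[(s, a_)])
    r = cost(s, a)
    s_next = random.choice(STATES)  # next task arrives independently
    best_next = min(Q[(s_next, a_)] for a_ in ACTIONS)
    # Q-learning update (cost-minimising form)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

policy = {s: min(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in STATES}
print(policy)  # under these costs, large tasks should be offloaded
```

Under the assumed costs, the learned policy keeps small tasks local and offloads large ones, which mirrors the qualitative behaviour the strategy targets: offloading only when local computation is too expensive in delay and energy.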