Liu Tianyi, Luo Ruyu, Xu Fangmin, Fan Chaoqiong, Zhao Chenglin
International School, Beijing University of Posts and Telecommunications, Beijing, 100876, China.
School of Information and Telecommunication Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China.
Sensors (Basel). 2020 Feb 12;20(4):973. doi: 10.3390/s20040973.
With the development of global urbanization, the Internet of Things (IoT) and smart cities have become hot research topics. As an emerging paradigm, edge computing can play an important role in smart cities because of its low latency and good performance. IoT devices can reduce time consumption by offloading computation to a mobile edge computing (MEC) server. However, if too many IoT devices simultaneously offload their computation tasks to the MEC server over the limited wireless channel, channel congestion may occur, increasing the time overhead. Given the large number of IoT devices in smart cities, a centralized resource allocation algorithm requires extensive signaling exchange, resulting in low efficiency. To solve this problem, this paper studies the joint communication and computing policy of IoT devices in edge computing through game theory, and proposes distributed Q-learning algorithms with two learning policies. Simulation results show that the algorithms converge quickly to a balanced solution.
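To make the idea concrete, the following is a minimal sketch of the kind of distributed offloading game the abstract describes; it is not the paper's exact algorithm. Each device runs its own stateless Q-learner over two actions (compute locally or offload), and the offloading delay grows with the number of devices that offload simultaneously, modeling channel congestion. All delay values and learning parameters here are illustrative assumptions.

```python
import random

# Illustrative sketch (not the paper's algorithm): N IoT devices each learn
# independently whether to compute locally or offload to a shared MEC server.
# Offloading delay scales with the number of simultaneous offloaders,
# modeling congestion on the limited wireless channel. Parameters assumed.
N_DEVICES = 10
LOCAL_DELAY = 5.0          # assumed local computation time
BASE_OFFLOAD_DELAY = 1.0   # assumed MEC delay on an uncongested channel
ALPHA, EPS = 0.1, 0.1      # learning rate and exploration probability

# One Q-value per action per device: index 0 = local, 1 = offload
Q = [[0.0, 0.0] for _ in range(N_DEVICES)]

def delay(action, n_offloaders):
    """Observed delay for one device, given how many devices offloaded."""
    if action == 0:
        return LOCAL_DELAY
    # congestion: offloading delay grows with channel contention
    return BASE_OFFLOAD_DELAY * n_offloaders

random.seed(0)
for episode in range(2000):
    # each device chooses independently (epsilon-greedy, no coordination)
    actions = []
    for q in Q:
        if random.random() < EPS:
            actions.append(random.randrange(2))
        else:
            actions.append(0 if q[0] >= q[1] else 1)
    n_off = sum(actions)
    # distributed update: each device uses only its own observed delay
    for q, a in zip(Q, actions):
        reward = -delay(a, n_off)
        q[a] += ALPHA * (reward - q[a])

n_off = sum(0 if q[0] >= q[1] else 1 for q in Q)
print(f"devices offloading at convergence: {n_off} of {N_DEVICES}")
```

Because each device updates only from its own observed delay, no centralized signaling is needed; the population tends toward a balanced split in which the congested offloading delay is comparable to the local computation delay, which is the equilibrium flavor the abstract refers to.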