Wang Xiumin, Li Lei, Li Jun, Li Zhengquan
College of Information Engineering, China Jiliang University, Hangzhou 310018, China.
Binjiang College, Nanjing University of Information Science & Technology, Wuxi 214105, China.
Entropy (Basel). 2020 Aug 30;22(9):957. doi: 10.3390/e22090957.
To maximize energy efficiency in heterogeneous networks (HetNets), a turbo Q-Learning (TQL) scheme that combines a multistage decision process with tabular Q-Learning is proposed to optimize the resource configuration. To handle the large action space, the energy-efficiency optimization problem is formulated in this paper as a multistage decision process: following the resource allocation of the optimization objectives, the initial problem is divided into several subproblems, each solved by tabular Q-Learning, so that the traditionally exponential growth of the action space is reduced to linear growth. The initial problem is then solved by iterating over the solutions of the subproblems, and a simple stability analysis of the algorithm is given. To handle the large state space, a deep neural network (DNN) is used to classify states, with the optimization policy of the proposed Q-Learning used to label the training samples. In this way, the dimensionality of both the action and the state space is addressed. Simulation results show that the proposed approach converges, improves the convergence speed by 60% while maintaining almost the same energy efficiency, and retains the ability to adapt to system adjustments.
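The action-space decomposition described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the toy environment (env_step), the state discretization, and the action dimensions below are placeholder assumptions; only the pattern of one small tabular Q-function per subproblem, updated stage by stage while the other sub-actions are held fixed, reflects the idea stated above.

```python
import numpy as np

# Assumed, illustrative problem sizes (not from the paper):
N_STATES = 50              # discretized state space
ACTION_SIZES = [8, 8, 8]   # e.g. power level, channel, user association
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# One tabular Q-function per subproblem: total table size is
# sum(|A_i|) per state (linear), not prod(|A_i|) (exponential).
q_tables = [np.zeros((N_STATES, a)) for a in ACTION_SIZES]

def env_step(state, joint_action):
    """Toy stand-in for a HetNet simulator (assumption): rewards joint
    actions whose sum matches a state-dependent target, as a crude
    proxy for an energy-efficiency reward."""
    target = state % sum(ACTION_SIZES)
    reward = -abs(sum(joint_action) - target)
    next_state = (state + 1) % N_STATES
    return reward, next_state

def select_action(stage, state, rng):
    """Epsilon-greedy choice within one stage's own action dimension."""
    if rng.random() < EPS:
        return int(rng.integers(ACTION_SIZES[stage]))
    return int(np.argmax(q_tables[stage][state]))

def q_update(stage, s, a, r, s_next):
    """Standard tabular Q-Learning update for the stage's own table."""
    best_next = np.max(q_tables[stage][s_next])
    q_tables[stage][s, a] += ALPHA * (r + GAMMA * best_next - q_tables[stage][s, a])

rng = np.random.default_rng(0)
joint_action = [0] * len(ACTION_SIZES)
state = 0
for episode in range(1000):
    # Multistage decision process: sweep the stages, optimizing one
    # sub-action at a time with the others held fixed, and iterate so
    # the subproblem solutions jointly solve the initial problem.
    for stage in range(len(ACTION_SIZES)):
        joint_action[stage] = select_action(stage, state, rng)
        r, s_next = env_step(state, joint_action)
        q_update(stage, state, joint_action[stage], r, s_next)
        state = s_next
```

The point of the decomposition is visible in the table sizes: with three sub-actions of 8 choices each, the per-state storage drops from 8³ = 512 joint entries to 8 + 8 + 8 = 24, which is what turns the exponential growth into linear growth as more resource dimensions are added.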