Key Laboratory of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China.
Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China.
Sensors (Basel). 2023 Feb 2;23(3):1618. doi: 10.3390/s23031618.
The rapid development of electric vehicle (EV) technology and the consequent charging demand have brought challenges to the stable operation of distribution networks (DNs). The collaborative optimization of EV charging scheduling and DN voltage control is an intractable problem because the uncertainties of both the EVs and the DN must be considered. In this paper, we propose a deep reinforcement learning (DRL) approach to coordinate EV charging scheduling and distribution network voltage control. The DRL-based strategy contains two layers: the upper layer aims to reduce the operating costs of power generation by distributed generators and of power consumption by EVs, while the lower layer controls the Volt/Var devices to maintain the voltage stability of the distribution network. We model the coordinated EV charging scheduling and voltage control problem in the distribution network as a Markov decision process (MDP). The model considers the uncertainty of the charging process caused by the charging behavior of EV users, as well as the uncertainties of uncontrollable load, dynamic electricity prices, and renewable energy generation. Since the model has a dynamic state space and mixed action outputs, a deep deterministic policy gradient (DDPG) framework is adopted to train the two-layer agent, and the policy network is designed to output both discrete and continuous control actions. Simulation and numerical results on the IEEE 33-bus test system demonstrate the effectiveness of the proposed method in collaborative EV charging scheduling and distribution network voltage stabilization.
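To make the mixed-action design concrete, the sketch below shows one plausible way a DDPG-style actor could emit both continuous set-points (e.g., EV charging power and distributed generator outputs) and discrete Volt/Var device positions from a shared state encoding. This is a minimal illustration only; the class name `MixedActionActor`, the layer sizes, and the example dimensions are assumptions for demonstration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class MixedActionActor(nn.Module):
    """Illustrative actor with both continuous and discrete action heads."""

    def __init__(self, state_dim: int, n_continuous: int, discrete_branches: list[int]):
        super().__init__()
        # Shared state encoder (hypothetical sizes).
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Continuous actions, e.g. EV charging power and DG set-points, bounded by tanh.
        self.continuous_head = nn.Linear(256, n_continuous)
        # One logit vector per discrete Volt/Var device (e.g. capacitor bank steps).
        self.discrete_heads = nn.ModuleList(
            [nn.Linear(256, n_options) for n_options in discrete_branches]
        )

    def forward(self, state: torch.Tensor):
        h = self.backbone(state)
        cont = torch.tanh(self.continuous_head(h))          # continuous actions in [-1, 1]
        disc_logits = [head(h) for head in self.discrete_heads]
        return cont, disc_logits


# Example usage with made-up dimensions: a 40-dimensional state, 10 continuous
# set-points, and two discrete devices with 5 positions each.
actor = MixedActionActor(state_dim=40, n_continuous=10, discrete_branches=[5, 5])
cont_a, disc_logits = actor(torch.randn(1, 40))
disc_a = [logits.argmax(dim=-1) for logits in disc_logits]  # greedy discrete choices
```

At execution time, the discrete actions would be taken from the per-device logits (here via argmax), while the continuous outputs are rescaled to the physical limits of the corresponding devices; the exact discretization and exploration scheme used by the authors is not specified in the abstract.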