Lee Sangyoon, Choi Dae-Hyun
School of Electrical and Electronics Engineering, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 156-756, Korea.
Sensors (Basel). 2020 Apr 10;20(7):2157. doi: 10.3390/s20072157.
This paper presents a hierarchical deep reinforcement learning (DRL) method for scheduling the energy consumption of smart home appliances and distributed energy resources (DERs), including an energy storage system (ESS) and an electric vehicle (EV). Compared to Q-learning algorithms based on a discrete action space, the novelty of the proposed approach is that the energy consumption of home appliances and DERs is scheduled in a continuous action space using an actor-critic-based DRL method. To this end, a two-level DRL framework is proposed in which home appliances are scheduled at the first level according to the consumer's preferred appliance scheduling and comfort level, while the charging and discharging schedules of the ESS and EV are calculated at the second level using the optimal solution from the first level along with consumer environmental characteristics. A simulation study is performed in a single home with an air conditioner, a washing machine, a rooftop solar photovoltaic system, an ESS, and an EV under time-of-use pricing. Numerical examples under different weather conditions, weekday/weekend settings, and EV driving patterns confirm the effectiveness of the proposed approach in terms of the total cost of electricity, the state of energy of the ESS and EV, and consumer preference.
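The following is a minimal sketch, not the authors' implementation, of the two-level actor-critic structure the abstract describes: a first-level actor proposes continuous appliance schedules, and a second-level actor computes ESS/EV charging and discharging actions conditioned on the first level's output. The use of PyTorch, the DDPG-style deterministic tanh actors, the network sizes, and the state layouts (price, temperature, PV output, state of energy, EV availability) are all assumptions for illustration.

```python
# Hypothetical two-level actor-critic scheduling sketch (assumed design,
# not the paper's code). Requires PyTorch.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy mapping a state to a continuous action in [-1, 1]."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded continuous action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Level 1: schedules appliance power (e.g., AC setpoint, washing machine
# operation) from a hypothetical state of price, temperature, and comfort
# preferences.
appliance_state_dim, appliance_action_dim = 6, 2
level1 = Actor(appliance_state_dim, appliance_action_dim)

# Level 2: schedules ESS/EV charging-discharging; its input augments a
# hypothetical environment state (PV output, state of energy, EV
# availability) with the level-1 action, mirroring the abstract's use of
# the first-level solution at the second level.
der_state_dim, der_action_dim = 5, 2
level2 = Actor(der_state_dim + appliance_action_dim, der_action_dim)

state1 = torch.randn(1, appliance_state_dim)   # placeholder observation
a1 = level1(state1)                            # appliance schedule
state2 = torch.randn(1, der_state_dim)         # placeholder DER observation
a2 = level2(torch.cat([state2, a1], dim=-1))   # ESS/EV schedule given level 1
print(a1, a2)
```

In an actor-critic method such as DDPG, each actor would be trained against its own critic estimating the scheduling cost (electricity bill plus comfort penalty); that training loop is omitted here for brevity.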