Li Mengshi, Yang Dongyan, Xu Yuhan, Ji Tianyao
School of Electric Power Engineering, South China University of Technology, 510000, Guangzhou, China.
Heliyon. 2024 Jul 8;10(14):e33944. doi: 10.1016/j.heliyon.2024.e33944. eCollection 2024 Jul 30.
Accurately modeling the overall uncertainty of a power system is challenging when it is connected to large-scale intermittent generation sources such as wind and photovoltaic generation, owing to the inherent volatility, uncertainty, and indivisibility of renewable energy. Deep reinforcement learning (DRL) algorithms are introduced as a solution that avoids modeling these complex uncertainties and adapts to fluctuations in uncertainty by interacting with the environment and using feedback to continuously improve its strategy. However, the large scale and uncertainty of the system lead to the sparse reward problem and the high-dimensional space problem in DRL. A hierarchical deep reinforcement learning (HDRL) scheme is therefore designed to decompose the solution process into two stages, using a reinforcement learning (RL) agent in the global stage and a heuristic algorithm in the local stage to find optimal dispatching decisions for power systems under uncertainty. Simulation studies show that the proposed HDRL scheme solves power system economic dispatch problems efficiently under both deterministic and uncertain scenarios, thanks to its ability to adapt to system uncertainty and cope with the volatility of uncertain factors while significantly improving the speed of online decision-making.
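The two-stage decomposition described in the abstract can be sketched in miniature: a global RL agent chooses a coarse decision, and a local heuristic turns that choice into a concrete dispatch. The sketch below is a hypothetical illustration only, not the paper's method: it uses plain tabular Q-learning as the global agent (choosing which units to commit) and a greedy merit-order heuristic as the local stage (loading committed units cheapest-first). All generator capacities, costs, demand levels, and penalty values are invented for illustration.

```python
import random

GENERATORS = [
    # (capacity in MW, marginal cost in $/MWh) -- invented numbers
    (100, 20.0),
    (80, 35.0),
    (60, 50.0),
]

COMMIT_COST = 100.0  # assumed fixed cost per committed unit
PENALTY = 1000.0     # assumed $/MWh penalty for unserved demand


def local_dispatch(committed, demand):
    """Local stage: greedy merit-order heuristic over the committed units."""
    cost = COMMIT_COST * len(committed)
    remaining = demand
    for i in sorted(committed, key=lambda i: GENERATORS[i][1]):
        capacity, marginal = GENERATORS[i]
        output = min(capacity, remaining)
        cost += output * marginal
        remaining -= output
    return cost + remaining * PENALTY  # penalize any unmet demand


def train_global_agent(episodes=3000, epsilon=0.1, alpha=0.2, seed=0):
    """Global stage: epsilon-greedy Q-learning over commitment choices."""
    rng = random.Random(seed)
    demands = [90, 150, 210]                # discrete "states"
    actions = [(0,), (0, 1), (0, 1, 2)]     # candidate commitment sets
    q = {(d, a): 0.0 for d in demands for a in range(len(actions))}
    for _ in range(episodes):
        d = rng.choice(demands)
        if rng.random() < epsilon:          # explore
            a = rng.randrange(len(actions))
        else:                               # exploit current estimate
            a = max(range(len(actions)), key=lambda x: q[(d, x)])
        # The local heuristic supplies the reward signal for the global agent.
        reward = -local_dispatch(actions[a], d)
        q[(d, a)] += alpha * (reward - q[(d, a)])
    # Extract the greedy policy: demand level -> commitment set.
    return {d: actions[max(range(len(actions)), key=lambda x: q[(d, x)])]
            for d in demands}
```

With the invented numbers above, the learned policy commits only as many units as each demand level requires, since committing extra units adds fixed cost while leaving demand unserved incurs the penalty. The division of labor mirrors the scheme's motivation: the local heuristic shields the RL agent from the fine-grained dispatch space, easing the high-dimensionality and sparse-reward issues.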