IEEE Trans Neural Netw Learn Syst. 2018 Jun;29(6):2192-2203. doi: 10.1109/TNNLS.2018.2801880.
Microgrids incorporating distributed generation (DG) units and energy storage (ES) devices are expected to play increasingly important roles in future power systems. However, achieving efficient distributed economic dispatch in microgrids is challenging due to the randomness and nonlinear characteristics of DG units and loads. This paper proposes a cooperative reinforcement learning algorithm for distributed economic dispatch in microgrids. The learning-based approach avoids the difficulty of stochastic modeling and its high computational complexity. In the cooperative reinforcement learning algorithm, function approximation is leveraged to handle the large, continuous state space, and a diffusion strategy is incorporated to coordinate the actions of DG units and ES devices. Under the proposed algorithm, each node in the microgrid communicates only with its local neighbors, without relying on any centralized controller. Algorithm convergence is analyzed, and simulations based on real-world meteorological and load data are conducted to validate the performance of the proposed algorithm.
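To make the described scheme concrete, the following is a minimal sketch (in Python) of a diffusion-style cooperative Q-learning update with linear function approximation, in the spirit of the abstract's "adapt locally, then combine with neighbors" idea. It is not the authors' implementation: the feature map, reward model, network topology, and combination weights are all illustrative assumptions.

# Minimal sketch (illustrative, not the paper's exact method): each node i keeps a
# linear Q-function parameter vector w_i, takes a local temporal-difference step
# ("adapt"), then mixes its intermediate estimate with those of its neighbors via
# a doubly stochastic combination matrix A ("combine"). No centralized controller
# is involved; every node only uses information from its immediate neighbors.
import numpy as np

rng = np.random.default_rng(0)

N = 4            # number of DG/ES nodes (assumed)
d = 8            # feature dimension (assumed)
alpha = 0.05     # learning-rate step size
gamma = 0.95     # discount factor
n_actions = 5    # discretized dispatch actions (assumed)

# Combination matrix over a ring communication graph (assumed topology),
# doubly stochastic so that the diffusion step averages neighbor estimates.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5
    A[i, (i - 1) % N] = 0.25
    A[i, (i + 1) % N] = 0.25

W = np.zeros((N, d))  # one linear parameter vector per node


def features(state, action):
    # Illustrative polynomial-style feature map phi(s, a) for the linear approximator.
    a = float(action)
    return np.array([1.0, state, state**2, a, a**2, state * a,
                     np.sin(state), np.cos(a)])


def local_reward(node, state, action):
    # Placeholder negative generation cost; the real cost model comes from the microgrid.
    return -(0.1 * float(action)**2 + 0.5 * float(action) + 0.02 * node)


for t in range(200):
    psi = np.empty_like(W)           # intermediate (adapted) parameters
    for i in range(N):
        s = rng.uniform(0.0, 1.0)     # sampled local state (e.g., normalized load/wind)
        a = int(rng.integers(0, n_actions))
        s_next = float(np.clip(s + rng.normal(0.0, 0.1), 0.0, 1.0))

        phi = features(s, a)
        q_next = max(features(s_next, b) @ W[i] for b in range(n_actions))
        td_error = local_reward(i, s, a) + gamma * q_next - phi @ W[i]

        # Adapt: local stochastic-gradient (TD) step using only node i's data.
        psi[i] = W[i] + alpha * td_error * phi

    # Combine: each node mixes only its neighbors' intermediate estimates.
    W = A @ psi

print("parameter disagreement across nodes:",
      np.linalg.norm(W - W.mean(axis=0), axis=1))

The printed disagreement shrinking over iterations is the intended effect of the combine step: neighbor-only averaging drives the nodes' value-function estimates toward consensus without any central coordinator.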