Resiliency Assessment of Power Systems Using Deep Reinforcement Learning.

Affiliations

Department of Mechatronics Engineering, German Jordanian University, Amman 11180, Jordan.

Department of Natural Science & Industrial Engineering, Deggendorf Institute of Technology, Deggendorf 94469, Germany.

Publication Information

Comput Intell Neurosci. 2022 Apr 7;2022:2017366. doi: 10.1155/2022/2017366. eCollection 2022.

Abstract

Evaluating the resiliency of power systems against abnormal operational conditions is crucial for adopting effective actions in planning and operation. This paper introduces the level-of-resilience (LoR) measure, which assesses power system resiliency as the minimum number of faults needed to produce a system outage (blackout) under sequential topology attacks. Four deep reinforcement learning (DRL)-based agents are used to determine the LoR: deep Q-network (DQN), double DQN, REINFORCE (Monte Carlo policy gradient), and REINFORCE with baseline. Three case studies based on the IEEE 6-bus test system are investigated. The results demonstrate that the double DQN agent achieved the highest success rate and was the fastest of the four agents; thus, it can serve as an efficient agent for resiliency evaluation.
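To make the approach concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a double-DQN agent could estimate an LoR on a toy grid model: the agent learns to choose sequential line outages, and the length of its greedy attack sequence that first produces a blackout is taken as the LoR estimate. Everything in the sketch is an assumption introduced for illustration, including the ToyGridEnv environment, the blackout rule based on a hypothetical CRITICAL line set, the reward values, and the network sizes; a real study would replace ToyGridEnv with a power-flow simulation of the IEEE 6-bus system.

```python
# Illustrative sketch only: a double-DQN agent that searches for the shortest
# sequence of line outages ("topology attacks") causing blackout in a toy grid.
# ToyGridEnv, CRITICAL, and the reward shaping are hypothetical placeholders.
import random
import numpy as np
import torch
import torch.nn as nn

N_LINES = 11                 # the IEEE 6-bus test system has 11 branches
CRITICAL = {2, 5, 8}         # hypothetical: tripping all of these causes blackout


class ToyGridEnv:
    """State = binary in-service vector over lines; action = trip one line."""

    def reset(self):
        self.lines = np.ones(N_LINES, dtype=np.float32)
        return self.lines.copy()

    def step(self, action):
        self.lines[action] = 0.0
        blackout = all(self.lines[i] == 0.0 for i in CRITICAL)
        reward = 10.0 if blackout else -1.0   # penalize long attack sequences
        return self.lines.copy(), reward, blackout


def make_net():
    return nn.Sequential(nn.Linear(N_LINES, 64), nn.ReLU(), nn.Linear(64, N_LINES))


online, target = make_net(), make_net()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.99, 0.2
env = ToyGridEnv()

for episode in range(300):
    s, done, steps = env.reset(), False, 0
    while not done and steps < N_LINES:
        if random.random() < eps:                      # epsilon-greedy exploration
            a = random.randrange(N_LINES)
        else:
            with torch.no_grad():
                a = int(online(torch.tensor(s)).argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s, steps = s2, steps + 1

    # Double-DQN update: the online net picks the next action,
    # the target net evaluates it (reduces Q-value overestimation).
    batch = random.sample(buffer, min(64, len(buffer)))
    S = torch.tensor(np.array([b[0] for b in batch]))
    A = torch.tensor([b[1] for b in batch])
    R = torch.tensor([b[2] for b in batch])
    S2 = torch.tensor(np.array([b[3] for b in batch]))
    D = torch.tensor([float(b[4]) for b in batch])
    with torch.no_grad():
        next_a = online(S2).argmax(dim=1)
        q_next = target(S2).gather(1, next_a.unsqueeze(1)).squeeze(1)
        y = R + gamma * (1 - D) * q_next
    q = online(S).gather(1, A.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    if episode % 20 == 0:
        target.load_state_dict(online.state_dict())

# Greedy rollout: the number of trips needed to reach blackout is the LoR estimate.
s, done, lor = env.reset(), False, 0
while not done and lor < N_LINES:
    with torch.no_grad():
        a = int(online(torch.tensor(s)).argmax())
    s, _, done = env.step(a)
    lor += 1
print("Estimated LoR (attacks to blackout):", lor)
```

The only double-DQN-specific detail is the target computation, where the online network selects the next action and the separate target network evaluates it; the DQN, REINFORCE, and REINFORCE-with-baseline agents compared in the paper would differ only in how the attack policy is trained.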

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/9010153/22b33848f132/CIN2022-2017366.001.jpg

Similar Articles

1. Resiliency Assessment of Power Systems Using Deep Reinforcement Learning. Comput Intell Neurosci. 2022 Apr 7;2022:2017366. doi: 10.1155/2022/2017366. eCollection 2022.
2. Approximate Policy-Based Accelerated Deep Reinforcement Learning. IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):1820-1830. doi: 10.1109/TNNLS.2019.2927227. Epub 2019 Aug 6.
3. Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture. IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2045-2056. doi: 10.1109/TNNLS.2021.3110281. Epub 2022 May 2.
4. A Deep Reinforcement Learning-Based MPPT Control for PV Systems under Partial Shading Condition. Sensors (Basel). 2020 May 27;20(11):3039. doi: 10.3390/s20113039.
5. Deep reinforcement learning for automated radiation adaptation in lung cancer. Med Phys. 2017 Dec;44(12):6690-6705. doi: 10.1002/mp.12625. Epub 2017 Nov 14.
6. Teleconsultation dynamic scheduling with a deep reinforcement learning approach. Artif Intell Med. 2024 Mar;149:102806. doi: 10.1016/j.artmed.2024.102806. Epub 2024 Feb 9.
7. Integrated Double Estimator Architecture for Reinforcement Learning. IEEE Trans Cybern. 2022 May;52(5):3111-3122. doi: 10.1109/TCYB.2020.3023033. Epub 2022 May 19.
8. Minibatch Recursive Least Squares Q-Learning. Comput Intell Neurosci. 2021 Oct 8;2021:5370281. doi: 10.1155/2021/5370281. eCollection 2021.
9. MonkeyKing: Adaptive Parameter Tuning on Big Data Platforms with Deep Reinforcement Learning. Big Data. 2020 Aug;8(4):270-290. doi: 10.1089/big.2019.0123. Epub 2020 Jul 10.

Cited By

1. Deep Reinforcement Learning-Based Trading Strategy for Load Aggregators on Price-Responsive Demand. Comput Intell Neurosci. 2022 Sep 12;2022:6884956. doi: 10.1155/2022/6884956. eCollection 2022.
2. Integrated Clinical Environment Security Analysis Using Reinforcement Learning. Bioengineering (Basel). 2022 Jun 13;9(6):253. doi: 10.3390/bioengineering9060253.

References

1. Availability Improvements through Data Slicing in PLC Smart Grid Networks. Sensors (Basel). 2020 Dec 17;20(24):7256. doi: 10.3390/s20247256.
2. Deep learning. Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
3. Human-level control through deep reinforcement learning. Nature. 2015 Feb 26;518(7540):529-33. doi: 10.1038/nature14236.
