

Graph-Based Topological Embedding and Deep Reinforcement Learning for Autonomous Voltage Control in Power System

Authors

Wei Hongtao, Chang Siyu, Zhang Jiaming

Affiliations

College of Information Engineering, Wuhan University of Technology, Wuhan 430070, China.

Publication

Sensors (Basel). 2025 Jan 25;25(3):733. doi: 10.3390/s25030733.

DOI: 10.3390/s25030733
PMID: 39943372
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11820440/
Abstract

With increasing power system complexity and distributed energy penetration, traditional voltage control methods struggle with dynamic changes and complex conditions. While existing deep reinforcement learning (DRL) methods have advanced grid control, challenges persist in leveraging topological features and ensuring computational efficiency. To address these issues, this paper proposes a DRL method combining Graph Convolutional Networks (GCNs) and soft actor-critic (SAC) for voltage control through load shedding. The method uses GCNs to extract higher-order topological features of the power grid, enhancing the state representation capability, while the SAC optimizes the load shedding strategy in continuous action space, dynamically adjusting the control scheme to balance load shedding costs and voltage stability. Results from the simulation of the IEEE 39-bus system indicate that the proposed method significantly reduces the amount of load shedding, improves voltage recovery levels, and demonstrates strong control performance and robustness when dealing with complex disturbances and topological changes. This study provides an innovative solution to voltage control problems in smart grids.
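The abstract describes a GCN encoder that turns raw bus measurements plus grid topology into higher-order state embeddings. As a rough illustration of that idea (a minimal numpy sketch, not the authors' implementation; the 4-bus toy grid, feature count, and hidden width are all invented for the example), a single graph convolution layer propagates per-bus features over the normalized adjacency matrix:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: relu(D^-1/2 (A + I) D^-1/2 H W).

    adj      -- (n, n) bus adjacency matrix of the grid graph
    features -- (n, f) raw per-bus state (e.g. voltage magnitude, load)
    weight   -- (f, h) learnable projection to the embedding space
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)           # ReLU

# Toy 4-bus ring grid, 2 features per bus, 8-dimensional embedding.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
features = rng.normal(size=(4, 2))
weight = rng.normal(size=(2, 8))
embedding = gcn_layer(adj, features, weight)
print(embedding.shape)  # one topological embedding per bus
```

Stacking such layers lets each bus embedding absorb information from multi-hop neighbors, which is what the paper means by "higher-order topological features"; the embeddings would then feed the SAC policy and critics.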

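SAC operates in a continuous action space, so the load-shedding decision is naturally a real-valued fraction per controllable bus. A common SAC design (sketched below under the assumption that the paper uses the standard tanh-squashed Gaussian policy; `shed_action` and its shapes are hypothetical, not from the paper) samples a reparameterized Gaussian action and squashes it into a bounded shedding fraction:

```python
import numpy as np

def shed_action(mean, log_std, rng):
    """Sample a load-shedding fraction in (0, 1) per controllable bus.

    The actor network would output `mean` and `log_std`; tanh squashes the
    reparameterized Gaussian sample into (-1, 1), and rescaling maps it to
    (0, 1), i.e. the fraction of load to shed at each bus.
    """
    std = np.exp(log_std)
    z = mean + std * rng.normal(size=np.shape(mean))  # reparameterization trick
    return 0.5 * (np.tanh(z) + 1.0)

# Example: 3 controllable buses, actor outputs centered at zero shedding bias.
rng = np.random.default_rng(1)
fractions = shed_action(np.zeros(3), np.full(3, -1.0), rng)
```

Bounding the action this way lets the reward trade off load-shedding cost against voltage recovery without the policy ever proposing an infeasible (negative or >100%) shedding amount.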

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/5cd5886af0cc/sensors-25-00733-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/f25fedc5480e/sensors-25-00733-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/5ac44f3740d1/sensors-25-00733-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/927ce6195cd9/sensors-25-00733-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/a4a4a3ed2ac5/sensors-25-00733-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/8592bbab235e/sensors-25-00733-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/1b405a41fb01/sensors-25-00733-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c33/11820440/074498927b98/sensors-25-00733-g008.jpg

Similar Articles

1. Graph-Based Topological Embedding and Deep Reinforcement Learning for Autonomous Voltage Control in Power System. Sensors (Basel). 2025 Jan 25;25(3):733. doi: 10.3390/s25030733.
2. Multi-Agent Graph-Attention Deep Reinforcement Learning for Post-Contingency Grid Emergency Voltage Control. IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3340-3350. doi: 10.1109/TNNLS.2023.3341334. Epub 2024 Feb 29.
3. Deep Reinforcement Learning for Load Shedding Against Short-Term Voltage Instability in Large Power Systems. IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4249-4260. doi: 10.1109/TNNLS.2021.3121757. Epub 2023 Aug 4.
4. Automatic Generation Control Based on Multiple Neural Networks With Actor-Critic Strategy. IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2483-2493. doi: 10.1109/TNNLS.2020.3006080. Epub 2021 Jun 2.
5. Broad Critic Deep Actor Reinforcement Learning for Continuous Control. IEEE Trans Neural Netw Learn Syst. 2025 Apr 8;PP. doi: 10.1109/TNNLS.2025.3554082.
6. Deep Reinforcement Learning for Charging Scheduling of Electric Vehicles Considering Distribution Network Voltage Stability. Sensors (Basel). 2023 Feb 2;23(3):1618. doi: 10.3390/s23031618.
7. A Time- and Space-Integrated Expansion Planning Method for AC/DC Hybrid Distribution Networks. Sensors (Basel). 2025 Apr 3;25(7):2276. doi: 10.3390/s25072276.
8. Graph Soft Actor-Critic Reinforcement Learning for Large-Scale Distributed Multirobot Coordination. IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):665-676. doi: 10.1109/TNNLS.2023.3329530. Epub 2025 Jan 7.
9. Enhancing the Minimum Awareness Failure Distance in V2X Communications: A Deep Reinforcement Learning Approach. Sensors (Basel). 2024 Sep 20;24(18):6086. doi: 10.3390/s24186086.
10. Adaptive energy loss optimization in distributed networks using reinforcement learning-enhanced crow search algorithm. Sci Rep. 2025 Apr 9;15(1):12165. doi: 10.1038/s41598-025-97354-z.

References Cited in This Article

1. Human-level control through deep reinforcement learning. Nature. 2015 Feb 26;518(7540):529-33. doi: 10.1038/nature14236.