


Solving the Zero-Sum Control Problem for Tidal Turbine System: An Online Reinforcement Learning Approach.

Authors

Fang Haiyang, Zhang Maoguang, He Shuping, Luan Xiaoli, Liu Fei, Ding Zhengtao

Publication

IEEE Trans Cybern. 2023 Dec;53(12):7635-7647. doi: 10.1109/TCYB.2022.3186886. Epub 2023 Nov 29.

DOI: 10.1109/TCYB.2022.3186886
PMID: 35839191
Abstract

This article proposes a novel completely mode-free integral reinforcement learning (CMFIRL) based iteration algorithm to compute the two-player zero-sum game and Nash equilibrium problem, that is, the optimal control policy pair, for a tidal turbine system described by a continuous-time Markov jump linear model with exact transition probabilities and completely unknown dynamics. First, the tidal turbine system is modeled as a Markov jump linear system, and a subsystem transformation technique is designed to decouple the jumping modes. Then, a completely mode-free reinforcement learning algorithm is employed to solve the game-coupled algebraic Riccati equations and reach the Nash equilibrium without using any information about the system dynamics. The learning algorithm uses a single iteration loop that updates the control policy and the disturbance policy simultaneously. An exploration signal is added to excite the system, and the convergence of the CMFIRL iteration algorithm is rigorously proved. Finally, a simulation example illustrates the effectiveness and applicability of the control design approach.
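The abstract describes policy iteration on game-coupled algebraic Riccati equations, with the control and disturbance policies updated simultaneously in one loop. As a rough illustration only, the sketch below runs the model-based counterpart of that iteration for a single-mode two-player zero-sum LQ game; the paper's CMFIRL algorithm reaches the same fixed point online without knowing the dynamics and additionally handles Markov jumping modes. The system matrices `A`, `B`, `D` and the weights here are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative (invented) single-mode linear system: dx = A x + B u + D w
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0], [1.0]])   # control input channel
D = np.array([[0.1], [0.1]])   # disturbance channel
Q, R = np.eye(2), np.eye(1)    # state and control weights
gamma = 5.0                    # disturbance attenuation level

# One iteration loop updating both players' policies simultaneously,
# mirroring the structure the abstract describes.
K = np.zeros((1, 2))           # initial (stabilizing) control policy
L = np.zeros((1, 2))           # initial disturbance policy
for _ in range(50):
    Ac = A - B @ K + D @ L     # closed loop under both policies
    # Policy evaluation: Ac'P + P Ac = -(Q + K'RK - gamma^2 L'L)
    P = solve_continuous_lyapunov(
        Ac.T, -(Q + K.T @ R @ K - gamma**2 * L.T @ L))
    # Policy improvement for control and disturbance players
    K = np.linalg.solve(R, B.T @ P)
    L = (D.T @ P) / gamma**2

# P now approximately solves the game algebraic Riccati equation:
# A'P + PA + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0
residual = (A.T @ P + P @ A + Q
            - P @ B @ np.linalg.solve(R, B.T @ P)
            + P @ D @ D.T @ P / gamma**2)
print(np.linalg.norm(residual))
```

For `gamma` above the system's H∞ attenuation level, the iteration converges and the saddle-point gains are recovered as `K` and `L`; the integral-RL version replaces the Lyapunov solves with least-squares fits to measured state trajectories excited by an exploration signal.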


Similar Articles

1
Solving the Zero-Sum Control Problem for Tidal Turbine System: An Online Reinforcement Learning Approach.
IEEE Trans Cybern. 2023 Dec;53(12):7635-7647. doi: 10.1109/TCYB.2022.3186886. Epub 2023 Nov 29.
2
ADP-Based Decentralized Load Frequency Control Schemes to Multiarea Asynchronous Markov Jumping Power Systems With Experience Replay.
IEEE Trans Cybern. 2024 Nov;54(11):6997-7010. doi: 10.1109/TCYB.2024.3443867. Epub 2024 Oct 30.
3
Secure Control for Markov Jump Cyber-Physical Systems Subject to Malicious Attacks: A Resilient Hybrid Learning Scheme.
IEEE Trans Cybern. 2024 Nov;54(11):7068-7079. doi: 10.1109/TCYB.2024.3448407. Epub 2024 Oct 30.
4
Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.
IEEE Trans Neural Netw Learn Syst. 2016 Dec;27(12):2577-2587. doi: 10.1109/TNNLS.2015.2496299. Epub 2015 Nov 20.
5
Nonfragile Output Feedback Tracking Control for Markov Jump Fuzzy Systems Based on Integral Reinforcement Learning Scheme.
IEEE Trans Cybern. 2023 Jul;53(7):4521-4530. doi: 10.1109/TCYB.2022.3203795. Epub 2023 Jun 15.
6
Policy Iteration Q-Learning for Data-Based Two-Player Zero-Sum Game of Linear Discrete-Time Systems.
IEEE Trans Cybern. 2021 Jul;51(7):3630-3640. doi: 10.1109/TCYB.2020.2970969. Epub 2021 Jun 23.
7
Discrete-Time Non-Zero-Sum Games With Completely Unknown Dynamics.
IEEE Trans Cybern. 2021 Jun;51(6):2929-2943. doi: 10.1109/TCYB.2019.2957406. Epub 2021 May 18.
8
Off-Policy Integral Reinforcement Learning Method to Solve Nonlinear Continuous-Time Multiplayer Nonzero-Sum Games.
IEEE Trans Neural Netw Learn Syst. 2017 Mar;28(3):704-713. doi: 10.1109/TNNLS.2016.2582849. Epub 2016 Jul 20.
9
Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1228-1241. doi: 10.1109/TNNLS.2020.3041469. Epub 2022 Feb 28.
10
Fuzzy H∞ Control of Discrete-Time Nonlinear Markov Jump Systems via a Novel Hybrid Reinforcement Q-Learning Method.
IEEE Trans Cybern. 2023 Nov;53(11):7380-7391. doi: 10.1109/TCYB.2022.3220537. Epub 2023 Oct 17.

Cited By

1
Adaptive Output Containment Tracking Control for Heterogeneous Wide-Area Networks with Aperiodic Intermittent Communication and Uncertain Leaders.
Sensors (Basel). 2023 Oct 22;23(20):8631. doi: 10.3390/s23208631.