


Adaptive control for circulating cooling water system using deep reinforcement learning.

Affiliation

School of Artificial Intelligence, Shenyang Aerospace University, Liaoning, China.

Publication Information

PLoS One. 2024 Jul 24;19(7):e0307767. doi: 10.1371/journal.pone.0307767. eCollection 2024.

DOI: 10.1371/journal.pone.0307767
PMID: 39047030
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11268623/
Abstract

Because of the complex internal working process of circulating cooling water systems, most traditional control methods struggle to achieve stable and precise control. This paper therefore presents a novel adaptive control structure based on the Twin Delayed Deep Deterministic Policy Gradient algorithm with a reference trajectory model (TD3-RTM). The structure is built on a Markov decision process formulation of the circulating cooling water system. First, the TD3 algorithm is used to construct a deep reinforcement learning agent. A state space is then selected and a dense reward function designed to account for the system's multivariable characteristics. The agent updates its networks from the reward values obtained through interaction with the system, gradually aligning its actions with the optimal policy. The reference trajectory model is introduced to accelerate the agent's convergence and to reduce oscillation and instability in the control system. Simulation experiments were conducted in MATLAB/Simulink. The results show that, compared with PID, fuzzy PID, DDPG, and TD3, the TD3-RTM method shortened the transient time in the flow loop by 6.09 s, 5.29 s, 0.57 s, and 0.77 s, respectively, and reduced the Integral of Absolute Error (IAE) index by 710.54, 335.1, 135.97, and 89.96; in the temperature loop, it shortened the transient time by 25.84 s, 13.65 s, 15.05 s, and 0.81 s and reduced the IAE by 143.9, 59.13, 31.79, and 1.77, respectively. In addition, compared with PID, fuzzy PID, and TD3, the TD3-RTM method reduced overshoot in the flow loop by 17.64, 7.79, and 1.29 percent, respectively.
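The abstract names TD3 as the base algorithm but does not reproduce its update rules. As background, the two mechanisms that distinguish TD3 from DDPG can be sketched in a few lines; all parameter values here are conventional defaults, not the paper's, and the function names are ours:

```python
import numpy as np

def td3_target(reward, gamma, q1_next, q2_next):
    """Clipped double-Q target: take the minimum of the twin critics'
    next-state estimates to curb the overestimation bias of plain DDPG."""
    return reward + gamma * min(q1_next, q2_next)

def smoothed_target_action(policy_action, noise_std=0.2, noise_clip=0.5,
                           act_low=-1.0, act_high=1.0, rng=None):
    """Target-policy smoothing: add clipped Gaussian noise to the target
    action so the critics cannot exploit narrow peaks in the Q-surface."""
    rng = np.random.default_rng(rng)
    noise = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    return float(np.clip(policy_action + noise, act_low, act_high))
```

TD3's third ingredient, the delayed policy update, simply updates the actor (and target networks) once every few critic updates.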

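The abstract describes a dense reward over the multivariable (flow, temperature) state and a reference trajectory model used to smooth convergence, without giving the formulas. A hypothetical sketch of that idea, assuming a first-order reference trajectory and an absolute-error reward (the weights, smoothing factor, and function names are our assumptions, not the paper's):

```python
def reference_step(y_ref, setpoint, alpha=0.1):
    """First-order reference trajectory: move a fraction alpha of the
    remaining distance toward the setpoint at each control step, giving
    the agent a smooth target instead of a step change."""
    return y_ref + alpha * (setpoint - y_ref)

def dense_reward(flow, temp, flow_ref, temp_ref, w_flow=1.0, w_temp=1.0):
    """Dense reward: negative weighted tracking error against the
    reference trajectory, so the agent receives feedback every step
    rather than only at the setpoint."""
    return -(w_flow * abs(flow - flow_ref) + w_temp * abs(temp - temp_ref))
```

Tracking a moving reference rather than the raw setpoint is one plausible way such a trajectory model could damp the oscillation the paper reports reducing.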

Figures (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/16bdf55d356f/pone.0307767.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/1f0163acf2f3/pone.0307767.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/fcc74dfcc0b3/pone.0307767.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/f766f5457e12/pone.0307767.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/0ba7c80f7821/pone.0307767.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/106d13c8e0b7/pone.0307767.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/912c38b7624a/pone.0307767.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/4bfa60c2adec/pone.0307767.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db0d/11268623/17e506f030ba/pone.0307767.g009.jpg

Similar Articles

1. Adaptive control for circulating cooling water system using deep reinforcement learning. PLoS One. 2024 Jul 24;19(7):e0307767. doi: 10.1371/journal.pone.0307767. eCollection 2024.
2. Reinforcement learning based temperature control of a fermentation bioreactor for ethanol production. Biotechnol Bioeng. 2024 Oct;121(10):3114-3127. doi: 10.1002/bit.28784. Epub 2024 Jun 27.
3. Model-Based Predictive Control and Reinforcement Learning for Planning Vehicle-Parking Trajectories for Vertical Parking Spaces. Sensors (Basel). 2023 Aug 11;23(16):7124. doi: 10.3390/s23167124.
4. Adaptive PI Controller Based on a Reinforcement Learning Algorithm for Speed Control of a DC Motor. Biomimetics (Basel). 2023 Sep 19;8(5):434. doi: 10.3390/biomimetics8050434.
5. Improved Performance for PMSM Sensorless Control Based on Robust-Type Controller, ESO-Type Observer, Multiple Neural Networks, and RL-TD3 Agent. Sensors (Basel). 2023 Jun 21;23(13):5799. doi: 10.3390/s23135799.
6. Closed-Loop Deep Brain Stimulation With Reinforcement Learning and Neural Simulation. IEEE Trans Neural Syst Rehabil Eng. 2024;32:3615-3624. doi: 10.1109/TNSRE.2024.3465243. Epub 2024 Sep 27.
7. End-to-End Autonomous Driving Decision Method Based on Improved TD3 Algorithm in Complex Scenarios. Sensors (Basel). 2024 Jul 31;24(15):4962. doi: 10.3390/s24154962.
8. Deep Reinforcement Learning-Based Accurate Control of Planetary Soft Landing. Sensors (Basel). 2021 Dec 6;21(23):8161. doi: 10.3390/s21238161.
9. Deep Reinforcement Learning for Indoor Mobile Robot Path Planning. Sensors (Basel). 2020 Sep 25;20(19):5493. doi: 10.3390/s20195493.
10. Path planning of mobile robot based on improved TD3 algorithm in dynamic environment. Heliyon. 2024 May 31;10(11):e32167. doi: 10.1016/j.heliyon.2024.e32167. eCollection 2024 Jun 15.
