

Active Flow Control for Drag Reduction Through Multi-agent Reinforcement Learning on a Turbulent Cylinder at .

Authors

Suárez Pol, Alcántara-Ávila Francisco, Miró Arnau, Rabault Jean, Font Bernat, Lehmkuhl Oriol, Vinuesa Ricardo

Affiliations

FLOW, Engineering Mechanics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden.

Barcelona Supercomputing Center (BSC-CNS), 08034 Barcelona, Spain.

Publication

Flow Turbul Combust. 2025;115(1):3-27. doi: 10.1007/s10494-025-00642-x. Epub 2025 Mar 5.

DOI: 10.1007/s10494-025-00642-x
PMID: 40406451
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12092499/
Abstract

This study presents novel drag reduction active-flow-control (AFC) strategies for a three-dimensional cylinder immersed in a flow at a Reynolds number based on freestream velocity and cylinder diameter of . The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through the use of deep reinforcement learning. The cylinder is equipped with 10 independent zero-net-mass-flux jet pairs, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. This work introduces a multi-stage training approach to expand the exploration space and enhance drag reduction stabilization. By accelerating training through the exploitation of local invariants with MARL, a drag reduction of approximately is achieved. The cooperative closed-loop strategy developed by the agents is sophisticated, as it utilizes a wide bandwidth of mass-flow-rate frequencies, which classical control methods are unable to match. Notably, the mass cost efficiency is demonstrated to be two orders of magnitude lower than that of classical control methods reported in the literature. These developments represent a significant advancement in active flow control in turbulent regimes, critical for industrial applications.
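The abstract describes coupling a CFD solver to a multi-agent reinforcement-learning framework in which every zero-net-mass-flux jet pair is an agent, and training is accelerated by exploiting local invariants: all agents share one policy that maps each pair's local observations to its own actuation. As a rough illustration of that shared-policy, local-observation idea (the class names, the linear "policy", and the observation layout below are hypothetical simplifications for exposition, not the paper's actual PPO implementation), a minimal sketch:

```python
import math
import random

class SharedPolicy:
    """One policy shared by all agents (the MARL local-invariance trick):
    the same parameters map any jet pair's local observation to an action,
    so every agent's experience trains the same network."""

    def __init__(self, obs_dim: int, seed: int = 0):
        rng = random.Random(seed)
        # Stand-in for a neural network: a single linear layer + tanh.
        self.w = [rng.gauss(0.0, 0.1) for _ in range(obs_dim)]

    def act(self, obs: list[float]) -> float:
        # Bounded mass-flow-rate command in (-1, 1).
        return math.tanh(sum(w * o for w, o in zip(self.w, obs)))

def step_all_agents(policy: SharedPolicy,
                    local_obs: list[list[float]]) -> list[tuple[float, float]]:
    """Each jet pair acts independently on its own local state.
    A zero-net-mass-flux pair blows +q from the top jet and -q from
    the bottom jet, so each pair injects no net mass."""
    actions = [policy.act(obs) for obs in local_obs]
    return [(a, -a) for a in actions]

# Ten jet pairs, each observing a small local state vector.
policy = SharedPolicy(obs_dim=4, seed=0)
observations = [[0.1 * i - 0.2, 0.3, -0.1, 0.05] for i in range(10)]
jet_commands = step_all_agents(policy, observations)
```

In an actual training loop the shared parameters would be updated with PPO from the pooled trajectories of all ten agents; the point of the sketch is only the structure: local observations in, per-pair zero-net-mass-flux actuation out, one set of weights for all agents.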


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/900186c70c3c/10494_2025_642_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/0c334e5e8d2c/10494_2025_642_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/cc3e976ba506/10494_2025_642_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/d23ac222bf6c/10494_2025_642_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/20ecba12bfcd/10494_2025_642_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/2edf0e2ad671/10494_2025_642_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/38cd09b851d0/10494_2025_642_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/4cfa1b85cea9/10494_2025_642_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/bd79be97636a/10494_2025_642_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/01c52f3a5c3a/10494_2025_642_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/fefdadd72877/10494_2025_642_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/756b/12092499/5fc44bfe47bc/10494_2025_642_Fig12_HTML.jpg

Similar articles

1. Active Flow Control for Drag Reduction Through Multi-agent Reinforcement Learning on a Turbulent Cylinder at .
   Flow Turbul Combust. 2025;115(1):3-27. doi: 10.1007/s10494-025-00642-x. Epub 2025 Mar 5.
2. Reinforcement learning for bluff body active flow control in experiments and simulations.
   Proc Natl Acad Sci U S A. 2020 Oct 20;117(42):26091-26098. doi: 10.1073/pnas.2004939117. Epub 2020 Oct 5.
3. Drag reduction study of a microfiber-coated cylinder.
   Sci Rep. 2022 Sep 2;12(1):15022. doi: 10.1038/s41598-022-19302-5.
4. Deep reinforcement learning for turbulent drag reduction in channel flows.
   Eur Phys J E Soft Matter. 2023 Apr 11;46(4):27. doi: 10.1140/epje/s10189-023-00285-8.
5. Drag Reduction Using Polysaccharides in a Taylor-Couette Flow.
   Polymers (Basel). 2017 Dec 7;9(12):683. doi: 10.3390/polym9120683.
6. Control of chaotic systems by deep reinforcement learning.
   Proc Math Phys Eng Sci. 2019 Nov;475(2231):20190351. doi: 10.1098/rspa.2019.0351. Epub 2019 Nov 6.
7. Deep reinforcement learning for active flow control in a turbulent separation bubble.
   Nat Commun. 2025 Feb 7;16(1):1422. doi: 10.1038/s41467-025-56408-6.
8. The Twente turbulent Taylor-Couette (T3C) facility: strongly turbulent (multiphase) flow between two independently rotating cylinders.
   Rev Sci Instrum. 2011 Feb;82(2):025105. doi: 10.1063/1.3548924.
9. Numerical Simulation and Deep Neural Network Revealed Drag Reduction of Microstructured Three-Dimensional Square Cylinders at High Reynolds Numbers.
   Front Bioeng Biotechnol. 2022 Jun 29;10:885962. doi: 10.3389/fbioe.2022.885962. eCollection 2022.
10. An off-policy multi-agent stochastic policy gradient algorithm for cooperative continuous control.
    Neural Netw. 2024 Feb;170:610-621. doi: 10.1016/j.neunet.2023.11.046. Epub 2023 Nov 23.
