Deep reinforcement learning for turbulent drag reduction in channel flows.

Affiliations

FLOW, Engineering Mechanics, KTH Royal Institute of Technology, 100 44, Stockholm, Sweden.

Swedish e-Science Research Centre (SeRC), 100 44, Stockholm, Sweden.

Publication information

Eur Phys J E Soft Matter. 2023 Apr 11;46(4):27. doi: 10.1140/epje/s10189-023-00285-8.

Abstract

We introduce a reinforcement learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows for both testing existing deep reinforcement learning (DRL) algorithms against a challenging task, and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries, and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing the user to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded, but too simple. DRL, by contrast, makes it possible to leverage the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient. Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively.
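The opposition-control baseline mentioned above is conventionally expressed as wall blowing and suction that opposes the wall-normal velocity sampled at a detection plane inside the channel. The following is a minimal illustrative sketch of that rule (not the authors' code; the function name, gain parameter, and mock data are assumptions for illustration):

```python
import numpy as np

def opposition_control(v_detection_plane, gain=1.0):
    """Illustrative opposition-control actuation (hypothetical helper).

    v_detection_plane : 2D array of wall-normal velocity v(x, y_d, z)
                        sampled at a detection plane y_d near the wall.
    gain              : control amplitude; gain=1.0 gives v_wall = -v_d.

    Returns the blowing/suction velocity imposed at the wall, with the
    mean subtracted so that the net mass flux through the wall is zero.
    """
    v_wall = -gain * v_detection_plane
    return v_wall - v_wall.mean()

# Mock sampled velocity field standing in for a simulation snapshot.
rng = np.random.default_rng(0)
v_d = rng.standard_normal((32, 32))
v_w = opposition_control(v_d)
print(abs(v_w.mean()) < 1e-12)  # net wall mass flux is (numerically) zero
```

A DRL agent such as deep deterministic policy gradient replaces this fixed linear mapping with a learned, nonlinear policy from the configurable observed state to the wall actuation, which is what allows it to exceed the opposition-control baseline reported in the abstract.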


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3698/10090012/d906667ad6d6/10189_2023_285_Fig1_HTML.jpg
