

Physics-informed reinforcement learning for motion control of a fish-like swimming robot.

Affiliation

Department of Mechanical Engineering, Clemson University, Clemson, SC, 29634, USA.

Publication information

Sci Rep. 2023 Jul 3;13(1):10754. doi: 10.1038/s41598-023-36399-4.

Abstract

Motion control of fish-like swimming robots presents many challenges due to the unstructured environment and unmodelled governing physics of the fluid-robot interaction. Commonly used low-fidelity control models using simplified formulas for drag and lift forces do not capture key physics that can play an important role in the dynamics of small-sized robots with limited actuation. Deep Reinforcement Learning (DRL) holds considerable promise for motion control of robots with complex dynamics. Reinforcement learning methods require large amounts of training data exploring a large subset of the relevant state space, which can be expensive, time-consuming, or unsafe to obtain. Data from simulations can be used in the initial stages of DRL, but in the case of swimming robots, the complexity of fluid-body interactions makes large numbers of simulations infeasible from the perspective of time and computational resources. Surrogate models that capture the primary physics of the system can be a useful starting point for training a DRL agent, which is subsequently transferred to train with a higher-fidelity simulation. We demonstrate the utility of such physics-informed reinforcement learning to train a policy that can enable velocity and path tracking for a planar swimming (fish-like) rigid Joukowski hydrofoil. This is done through a curriculum where the DRL agent is first trained to track limit cycles in a velocity space for a representative nonholonomic system, and then transferred to train on a small simulation data set of the swimmer. The results show the utility of physics-informed reinforcement learning for the control of fish-like swimming robots.
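The two-stage curriculum the abstract describes, training first on a cheap physics-based surrogate and then fine-tuning on higher-fidelity dynamics, can be sketched with a toy example. The sketch below is a minimal illustrative assumption, not the paper's method: it uses tabular Q-learning on a one-dimensional velocity-tracking task rather than deep RL on a Joukowski hydrofoil, and the surrogate/high-fidelity split is modelled as linear vs. linear-plus-quadratic drag. All function names, dynamics, and hyperparameters are hypothetical.

```python
import numpy as np

def make_dynamics(quadratic_drag):
    # Low-fidelity surrogate: linear drag only. The "high-fidelity" model
    # adds a quadratic drag term as a stand-in for unmodelled fluid effects.
    def step(v, thrust, dt=0.1):
        drag = 0.5 * v + (0.8 * v * abs(v) if quadratic_drag else 0.0)
        return v + (thrust - drag) * dt
    return step

def bucket(v, n_bins=21):
    # Discretize velocity in [-2, 2] onto a small state grid.
    return int(np.clip((v + 2.0) / 4.0 * (n_bins - 1), 0, n_bins - 1))

def train(step_fn, q=None, episodes=200, target=1.0, seed=0):
    rng = np.random.default_rng(seed)
    actions = np.array([-1.0, 0.0, 1.0])          # discrete thrust levels
    if q is None:                                  # fresh table, or warm-start
        q = np.zeros((21, len(actions)))           # (transfer) if q is passed in
    alpha, gamma, eps = 0.2, 0.95, 0.1
    for _ in range(episodes):
        v = rng.uniform(-0.5, 0.5)
        for _ in range(50):
            s = bucket(v)
            a = (rng.integers(len(actions)) if rng.random() < eps
                 else int(np.argmax(q[s])))
            v2 = step_fn(v, actions[a])
            r = -abs(v2 - target)                  # reward: negative tracking error
            q[s, a] += alpha * (r + gamma * q[bucket(v2)].max() - q[s, a])
            v = v2
    return q

def tracking_error(step_fn, q, target=1.0):
    # Roll out the greedy policy and report the mean late-horizon error.
    actions, v, errs = np.array([-1.0, 0.0, 1.0]), 0.0, []
    for _ in range(50):
        v = step_fn(v, actions[int(np.argmax(q[bucket(v)]))])
        errs.append(abs(v - target))
    return float(np.mean(errs[-10:]))

surrogate = make_dynamics(quadratic_drag=False)
high_fid = make_dynamics(quadratic_drag=True)

q0 = train(surrogate)                              # stage 1: cheap surrogate
q1 = train(high_fid, q=q0.copy(), episodes=40)     # stage 2: few high-fidelity episodes
```

The key design point mirrored from the paper is that stage 2 reuses the value estimates learned on the surrogate (`q=q0.copy()`), so only a small budget of expensive high-fidelity episodes is needed to adapt the policy.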


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c013/10318098/4a6c7fecb887/41598_2023_36399_Fig1_HTML.jpg
