School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China.
College of Computer Science, Sichuan University, Chengdu 610065, China.
Sensors (Basel). 2022 Aug 28;22(17):6475. doi: 10.3390/s22176475.
The required navigation performance (RNP) procedure is one of the two basic navigation specifications of performance-based navigation (PBN), proposed by the International Civil Aviation Organization (ICAO) to integrate global navigation infrastructures, improve the utilization efficiency of airspace, and reduce both flight delays and the dependence on ground navigation facilities. The approach is one of the most important and difficult stages of the whole flight. In this study, we propose a deep reinforcement learning (DRL)-based method for RNP procedure execution, DRL-RNP. On an RNP approach procedure, a DRL algorithm was trained to control a fixed-wing aircraft and, guided by the reward, to explore a path of minimum fuel consumption under windy conditions in compliance with the RNP safety specifications. The experimental results demonstrate that the six-degrees-of-freedom aircraft controlled by the DRL algorithm can successfully complete the RNP procedure while meeting the safety specifications for protection areas and obstacle clearance altitude throughout the procedure. In addition, a potential path with minimum fuel consumption can be explored effectively. Hence, the DRL method can be used not only to execute the RNP procedure with a simulated aircraft but also to support the verification and evaluation of RNP procedures.
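The reward design sketched in the abstract, minimizing fuel consumption while enforcing the RNP containment and obstacle-clearance constraints, could take the following shape. This is a minimal, hypothetical sketch: the function name, signature, and penalty value are illustrative assumptions, not the paper's actual implementation.

```python
def rnp_reward(fuel_flow_kg_s, cross_track_error_nm, rnp_value_nm,
               altitude_ft, oca_ft, dt_s=1.0):
    """Hypothetical per-step reward for an RNP approach agent.

    Returns (reward, done). Fuel burned during the step is penalized,
    so maximizing return minimizes total fuel consumption. Leaving the
    RNP containment (|cross-track error| > RNP value) or descending
    below the obstacle clearance altitude (OCA) ends the episode with
    a large penalty, encoding the RNP safety specifications.
    """
    # Hard safety constraints from the RNP specification.
    if abs(cross_track_error_nm) > rnp_value_nm or altitude_ft < oca_ft:
        return -100.0, True  # violation: large penalty, episode done
    # Otherwise, reward is the negative fuel burned during this step.
    fuel_burned_kg = fuel_flow_kg_s * dt_s
    return -fuel_burned_kg, False
```

In practice such a term would be combined with tracking and terminal rewards, but it illustrates how fuel economy and RNP safety can be expressed in a single scalar signal for the DRL agent.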