

DRL-RNP: Deep Reinforcement Learning-Based Optimized RNP Flight Procedure Execution.

Affiliations

School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China.

College of Computer Science, Sichuan University, Chengdu 610065, China.

Publication

Sensors (Basel). 2022 Aug 28;22(17):6475. doi: 10.3390/s22176475.

DOI: 10.3390/s22176475
PMID: 36080933
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460910/
Abstract

The required navigation performance (RNP) procedure is one of the two basic navigation specifications of performance-based navigation (PBN), proposed by the International Civil Aviation Organization (ICAO) to integrate global navigation infrastructure, improve the utilization efficiency of airspace, and reduce both flight delays and dependence on ground navigation facilities. The approach is one of the most important and difficult stages of the whole flight. In this study, we propose DRL-RNP, a deep reinforcement learning (DRL)-based method for RNP procedure execution. On an RNP approach procedure, the DRL algorithm was implemented with a fixed-wing aircraft to explore, through the reward, a path of minimum fuel consumption under windy conditions in compliance with the RNP safety specifications. The experimental results demonstrate that the six-degrees-of-freedom aircraft controlled by the DRL algorithm can successfully complete the RNP procedure while meeting the safety specifications for protection areas and obstacle clearance altitude throughout the procedure. In addition, the potential path with minimum fuel consumption can be explored effectively. Hence, the DRL method can be used not only to implement the RNP procedure with a simulated aircraft but also to help verify and evaluate the RNP procedure.

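The abstract describes a reward that trades off fuel consumption against RNP safety constraints (protection-area containment and obstacle clearance altitude). The exact reward design used in the paper is not given in the abstract; the sketch below is a hypothetical reward-shaping function illustrating that idea, where the RNP containment limit, obstacle clearance altitude, and all penalty/bonus magnitudes are assumed values, not the authors'.

```python
RNP_NM = 0.3     # lateral containment limit in nautical miles (assumed, e.g. RNP 0.3)
OCA_FT = 1200.0  # obstacle clearance altitude in feet (illustrative value)

def step_reward(cross_track_nm, altitude_ft, fuel_flow_kg_s, dt_s, reached_fix=False):
    """Per-step reward for a DRL agent flying an RNP approach:
    negative fuel burn, hard penalties for violating the RNP safety
    specifications, and a bonus for reaching the approach fix."""
    reward = -fuel_flow_kg_s * dt_s           # minimize fuel consumption
    if abs(cross_track_nm) > RNP_NM:          # outside the lateral protection area
        reward -= 100.0
    if altitude_ft < OCA_FT:                  # below obstacle clearance altitude
        reward -= 100.0
    if reached_fix:                           # terminal bonus at the fix
        reward += 50.0
    return reward
```

With this shaping, a policy that maximizes the return is pushed toward low-fuel trajectories that never leave the containment corridor or descend below the OCA, mirroring the constrained fuel-minimization objective the abstract describes.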

Figures (sensors-22-06475, g001–g018):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/4fd783663949/sensors-22-06475-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/54a53c4f31a2/sensors-22-06475-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/d28005a40269/sensors-22-06475-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/c561e156d73b/sensors-22-06475-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/1fe60fbd0029/sensors-22-06475-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/05289f6c4d3d/sensors-22-06475-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/b0db6a0def4e/sensors-22-06475-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/4add99f49bb8/sensors-22-06475-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/4f5177155827/sensors-22-06475-g009a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/f80f6a89fb91/sensors-22-06475-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/49470fde9148/sensors-22-06475-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/787317d62192/sensors-22-06475-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/24cdcca69535/sensors-22-06475-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/f7eb935e6758/sensors-22-06475-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/36e31a74eb7c/sensors-22-06475-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/385001ade3e6/sensors-22-06475-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/db47310f5c8d/sensors-22-06475-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/987f/9460910/e8c128f9f601/sensors-22-06475-g018.jpg

Similar Articles

1. DRL-RNP: Deep Reinforcement Learning-Based Optimized RNP Flight Procedure Execution.
Sensors (Basel). 2022 Aug 28;22(17):6475. doi: 10.3390/s22176475.
2. Predictive hierarchical reinforcement learning for path-efficient mapless navigation with moving target.
Neural Netw. 2023 Aug;165:677-688. doi: 10.1016/j.neunet.2023.06.007. Epub 2023 Jun 10.
3. Enhanced Vertical Navigation Using Barometric Measurements.
Sensors (Basel). 2022 Nov 28;22(23):9263. doi: 10.3390/s22239263.
4. Deep reinforcement learning-aided autonomous navigation with landmark generators.
Front Neurorobot. 2023 Aug 22;17:1200214. doi: 10.3389/fnbot.2023.1200214. eCollection 2023.
5. Adaptive Navigation Performance Evaluation Method for Civil Aircraft Navigation Systems with Unknown Time-Varying Sensor Noise.
Sensors (Basel). 2024 Aug 6;24(16):5093. doi: 10.3390/s24165093.
6. Decision-Making for the Autonomous Navigation of Maritime Autonomous Surface Ships Based on Scene Division and Deep Reinforcement Learning.
Sensors (Basel). 2019 Sep 19;19(18):4055. doi: 10.3390/s19184055.
7. The Impact of LiDAR Configuration on Goal-Based Navigation within a Deep Reinforcement Learning Framework.
Sensors (Basel). 2023 Dec 9;23(24):9732. doi: 10.3390/s23249732.
8. Learning Reward Function with Matching Network for Mapless Navigation.
Sensors (Basel). 2020 Jun 30;20(13):3664. doi: 10.3390/s20133664.
9. Improved Artificial Potential Field Algorithm Assisted by Multisource Data for AUV Path Planning.
Sensors (Basel). 2023 Jul 26;23(15):6680. doi: 10.3390/s23156680.
10. A Multi-Dimensional Goal Aircraft Guidance Approach Based on Reinforcement Learning with a Reward Shaping Algorithm.
Sensors (Basel). 2021 Aug 21;21(16):5643. doi: 10.3390/s21165643.

Cited By

1. Research on the Collision Risk of Fusion Operation of Manned Aircraft and Unmanned Aircraft at Zigong Airport.
Sensors (Basel). 2024 Jul 25;24(15):4842. doi: 10.3390/s24154842.

References

1. A Multi-Dimensional Goal Aircraft Guidance Approach Based on Reinforcement Learning with a Reward Shaping Algorithm.
Sensors (Basel). 2021 Aug 21;21(16):5643. doi: 10.3390/s21165643.
2. Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning.
Sensors (Basel). 2021 Feb 11;21(4):1278. doi: 10.3390/s21041278.
3. An Autonomous Path Planning Model for Unmanned Ships Based on Deep Reinforcement Learning.
Sensors (Basel). 2020 Jan 11;20(2):426. doi: 10.3390/s20020426.
4. Mastering the game of Go with deep neural networks and tree search.
Nature. 2016 Jan 28;529(7587):484-9. doi: 10.1038/nature16961.
5. A unified analysis of value-function-based reinforcement-learning algorithms.
Neural Comput. 1999 Nov 15;11(8):2017-59. doi: 10.1162/089976699300016070.