Xi'an Microelectronics Technology Institute, Xi'an 710065, China.
Sichuan Tengden Technology Co., Ltd., Chengdu 610037, China.
Sensors (Basel). 2021 Sep 20;21(18):6302. doi: 10.3390/s21186302.
Altitude estimation is one of the fundamental tasks of unmanned aerial vehicle (UAV) automatic navigation; it aims to accurately and robustly estimate the relative altitude between the UAV and specific areas. However, most methods rely on auxiliary signal reception or expensive equipment, which is not always available or applicable owing to signal interference, cost, or power-consumption limitations in real application scenarios. In addition, fixed-wing UAVs have more complex kinematic models than vertical take-off and landing UAVs. Therefore, an altitude estimation method that can be robustly applied to fixed-wing UAVs in a GPS-denied environment must be considered. In this paper, we present a high-precision altitude estimation method that combines vision information from a monocular camera with pose information from an inertial measurement unit (IMU) through a novel end-to-end deep neural network architecture. Our method has numerous advantages over existing approaches. First, we utilize visual-inertial information and physics-based reasoning to build an ideal altitude model that provides general applicability and data efficiency for neural network learning. A further advantage is a novel feature fusion module that simplifies the tedious manual calibration and synchronization of the camera and IMU, which standard visual or visual-inertial methods require to obtain the data association for altitude estimation modeling. Finally, the proposed method was evaluated and validated using real flight data obtained during the landing phase of a fixed-wing UAV. The results show that the average estimation error of our method is less than 3% of the actual altitude, which vastly improves altitude estimation accuracy compared with other visual and visual-inertial methods.
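To make the idea of combining monocular vision with IMU pose information concrete, the sketch below shows one common physics-based altitude cue, not the paper's actual model: from the pinhole camera geometry, a ground feature of known real-world size (e.g., runway width) observed at a known pixel extent yields a slant range, which the IMU's depression (pitch) angle projects onto the vertical. All function names and numerical values here are hypothetical illustrations.

```python
import numpy as np

def altitude_from_feature(focal_px, feature_width_m, feature_width_px, depression_rad):
    """Estimate altitude from a ground feature of known width.

    focal_px        -- camera focal length in pixels
    feature_width_m -- true width of the feature on the ground (meters)
    feature_width_px-- observed width of the feature in the image (pixels)
    depression_rad  -- camera depression angle below horizontal, from the IMU
    """
    # Pinhole similar triangles: slant range along the optical axis.
    slant_range = focal_px * feature_width_m / feature_width_px
    # Project the slant range onto the vertical using the IMU angle.
    return slant_range * np.sin(depression_rad)

# Example: a 45 m wide runway spanning 150 px, focal length 1000 px,
# camera depressed 30 degrees -> slant range 300 m, altitude 150 m.
h = altitude_from_feature(1000.0, 45.0, 150.0, np.radians(30.0))
```

A model of this kind supplies the geometric prior, while the network learns to correct for the unmodeled effects (lens distortion, timing offsets, attitude noise) that otherwise require careful manual calibration.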