Cong Yangzi, Su Wenbin, Jiang Nan, Zong Wenpeng, Li Long, Xu Yan, Xu Tianhe, Wu Paipai
Institute of Space Sciences, Shandong University, Weihai 264209, China.
China Research Institute of Radio Wave Propagation, Qingdao 266107, China.
Sensors (Basel). 2025 Aug 1;25(15):4745. doi: 10.3390/s25154745.
In a variety of UAV applications, visual-inertial navigation systems (VINSs) play a crucial role in providing accurate positioning and navigation solutions. However, traditional VINSs struggle to adapt flexibly to varying environmental conditions because their covariance matrices are fixed. This limitation becomes especially acute during high-speed drone operations, where motion blur and fluctuating image clarity can significantly compromise navigation accuracy and system robustness. To address these issues, we propose an adaptive covariance matrix estimation method for UAV-based VINSs built on a Gaussian model. Our approach enhances the accuracy and robustness of the navigation system by dynamically adjusting the covariance matrix according to image quality. Using the Laplacian operator, we assess image blur in detail, achieving precise perception of image quality. Based on these assessments, a novel mechanism dynamically adjusts the visual covariance matrix through a Gaussian model according to the clarity of images in the current environment. Extensive simulation experiments on the EuRoC and TUM VI datasets, together with field tests, validate our method, demonstrating significant improvements in drone navigation accuracy under motion blur. Our algorithm achieves significantly higher accuracy than the well-known VINS-Mono framework, outperforming it by 18.18% on average; in the outdoor field tests, the RMS error improves by 65.66% on the F1 dataset and 41.74% on F2.
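The pipeline described above can be illustrated with a minimal sketch. The abstract does not give the exact formulas, so the variance-of-Laplacian sharpness score and the specific Gaussian mapping below (including the `sigma` and `inflate` parameters and both function names) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Score image sharpness as the variance of the 3x3 Laplacian response.

    Low values indicate motion blur; high values indicate a crisp image.
    (A common blur metric; assumed here, not taken from the paper.)
    """
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode convolution of the Laplacian kernel over the image.
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def visual_covariance_scale(sharpness: float,
                            sigma: float = 200.0,
                            inflate: float = 9.0) -> float:
    """Map a sharpness score to a multiplier on the visual measurement covariance.

    A Gaussian-shaped curve centered at zero sharpness: blurred frames
    (sharpness near 0) inflate the covariance by up to (1 + inflate),
    down-weighting vision in the estimator, while sharp frames keep the
    nominal covariance (multiplier near 1). Parameters are illustrative.
    """
    return 1.0 + inflate * np.exp(-0.5 * (sharpness / sigma) ** 2)
```

In a VINS back end, this multiplier would scale the visual residual covariance each frame, so blurred images contribute less to the optimization relative to the inertial measurements.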