Song Jimin, Jo HyungGi, Jin Yongsik, Lee Sang Jun
Division of Electronic Engineering, Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju 54896, Republic of Korea.
Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute (ETRI), Daegu 42994, Republic of Korea.
Sensors (Basel). 2024 Oct 16;24(20):6665. doi: 10.3390/s24206665.
Simultaneous localization and mapping, a critical technology for enabling the autonomous driving of vehicles and mobile robots, increasingly incorporates multi-sensor configurations. Inertial measurement units (IMUs), which measure acceleration and angular velocity, are widely used for motion estimation due to their cost efficiency. However, the inherent noise in IMU measurements necessitates the integration of additional sensors to provide the spatial understanding needed for mapping. Visual-inertial odometry (VIO) is a prominent approach that combines cameras with IMUs, offering high spatial resolution while remaining cost-effective. In this paper, we introduce an uncertainty-aware depth network (UD-Net), designed to estimate both depth and uncertainty maps. We propose a novel loss function for training UD-Net and filter out unreliable depth values based on the uncertainty maps to improve VIO performance. Experiments were conducted on the KITTI dataset and on our custom dataset acquired from various driving scenarios. The results demonstrate that the proposed UD-Net-based VIO algorithm outperforms previous methods by a significant margin.
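The uncertainty-based filtering step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the threshold value and the convention of marking invalid pixels with zero depth are assumptions for illustration only.

```python
import numpy as np

def filter_depth(depth: np.ndarray, uncertainty: np.ndarray,
                 threshold: float = 0.5) -> np.ndarray:
    """Invalidate depth predictions whose uncertainty exceeds a threshold.

    depth:       (H, W) predicted depth map
    uncertainty: (H, W) predicted per-pixel uncertainty
    Returns a copy of `depth` with unreliable pixels set to 0 (invalid).
    """
    filtered = depth.copy()
    filtered[uncertainty > threshold] = 0.0  # 0 marks an invalid depth pixel
    return filtered

# Toy 2x2 example: the high-uncertainty pixels are discarded.
depth = np.array([[1.2, 3.4],
                  [5.6, 7.8]])
unc = np.array([[0.1, 0.9],
                [0.2, 0.6]])
print(filter_depth(depth, unc))
# -> [[1.2 0. ]
#     [5.6 0. ]]
```

Only the surviving depth values would then be passed to the VIO back end, so that noisy network predictions do not corrupt the motion estimate.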