

Uncertainty-Aware Depth Network for Visual Inertial Odometry of Mobile Robots

Authors

Song Jimin, Jo HyungGi, Jin Yongsik, Lee Sang Jun

Affiliations

Division of Electronic Engineering, Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju 54896, Republic of Korea.

Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute (ETRI), Daegu 42994, Republic of Korea.

Publication

Sensors (Basel). 2024 Oct 16;24(20):6665. doi: 10.3390/s24206665.

DOI:10.3390/s24206665
PMID:39460145
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11511567/
Abstract

Simultaneous localization and mapping, a critical technology for enabling the autonomous driving of vehicles and mobile robots, increasingly incorporates multi-sensor configurations. Inertial measurement units (IMUs), known for their ability to measure acceleration and angular velocity, are widely utilized for motion estimation due to their cost efficiency. However, the inherent noise in IMU measurements necessitates the integration of additional sensors to facilitate spatial understanding for mapping. Visual-inertial odometry (VIO) is a prominent approach that combines cameras with IMUs, offering high spatial resolution while maintaining cost-effectiveness. In this paper, we introduce our uncertainty-aware depth network (UD-Net), which is designed to estimate both depth and uncertainty maps. We propose a novel loss function for the training of UD-Net, and unreliable depth values are filtered out to improve VIO performance based on the uncertainty maps. Experiments were conducted on the KITTI dataset and our custom dataset acquired from various driving scenarios. Experimental results demonstrated that the proposed VIO algorithm based on UD-Net outperforms previous methods with a significant margin.
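The abstract describes two mechanisms: a loss function that trains UD-Net to predict per-pixel depth together with an uncertainty map, and a filtering step that discards unreliable depth values before they enter the VIO pipeline. The paper's exact loss and threshold are not given in the abstract, so the sketch below uses a generic heteroscedastic-regression loss (residuals down-weighted by predicted variance, plus a log-variance penalty) and a hypothetical quantile threshold purely for illustration:

```python
import numpy as np

def heteroscedastic_depth_loss(pred_depth, log_var, gt_depth, valid_mask):
    """Generic uncertainty-aware regression loss: residuals are
    down-weighted by the predicted variance, and a log-variance term
    penalizes the network for inflating uncertainty everywhere.
    This is an illustrative stand-in, not the paper's actual loss."""
    residual = np.abs(pred_depth - gt_depth)
    per_pixel = np.exp(-log_var) * residual + log_var
    return per_pixel[valid_mask].mean()

def filter_depth_by_uncertainty(depth, uncertainty, keep_quantile=0.8):
    """Discard depth values whose predicted uncertainty falls above the
    given quantile, keeping only the most reliable pixels for VIO.
    The 0.8 quantile is a hypothetical choice for illustration."""
    threshold = np.quantile(uncertainty, keep_quantile)
    reliable = uncertainty <= threshold
    filtered = np.where(reliable, depth, np.nan)  # NaN marks rejected pixels
    return filtered, reliable
```

In a real pipeline both maps would come from the network's two output heads; the reliable mask would then gate which depth measurements are fused with the IMU-propagated state.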


Figures (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/839f65d678e8/sensors-24-06665-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/565146ec4761/sensors-24-06665-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/48156909f193/sensors-24-06665-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/1a0f470248b7/sensors-24-06665-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/761f3e32994a/sensors-24-06665-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7279/11511567/5f9cfc247a9b/sensors-24-06665-g006.jpg

Similar Articles

1. Uncertainty-Aware Depth Network for Visual Inertial Odometry of Mobile Robots.
   Sensors (Basel). 2024 Oct 16;24(20):6665. doi: 10.3390/s24206665.
2. SelfVIO: Self-supervised deep monocular Visual-Inertial Odometry and depth estimation.
   Neural Netw. 2022 Jun;150:119-136. doi: 10.1016/j.neunet.2022.03.005. Epub 2022 Mar 10.
3. ESVIO: Event-Based Stereo Visual-Inertial Odometry.
   Sensors (Basel). 2023 Feb 10;23(4):1998. doi: 10.3390/s23041998.
4. Robust Stereo Visual-Inertial Odometry Using Nonlinear Optimization.
   Sensors (Basel). 2019 Aug 29;19(17):3747. doi: 10.3390/s19173747.
5. An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System.
   Micromachines (Basel). 2022 Apr 12;13(4):602. doi: 10.3390/mi13040602.
6. An Enhanced Hybrid Visual-Inertial Odometry System for Indoor Mobile Robot.
   Sensors (Basel). 2022 Apr 11;22(8):2930. doi: 10.3390/s22082930.
7. PL-VIO: Tightly-Coupled Monocular Visual-Inertial Odometry Using Point and Line Features.
   Sensors (Basel). 2018 Apr 10;18(4):1159. doi: 10.3390/s18041159.
8. Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery.
   IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2478-2493. doi: 10.1109/TPAMI.2019.2909895. Epub 2019 Apr 15.
9. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.
   Sensors (Basel). 2017 Nov 7;17(11):2567. doi: 10.3390/s17112567.
10. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots.
    Sensors (Basel). 2019 May 15;19(10):2251. doi: 10.3390/s19102251.
