Suppr 超能文献


SelfVIO: Self-supervised deep monocular Visual-Inertial Odometry and depth estimation.

Affiliations

Computer Science Department, The University of Oxford, UK.

Institute of Biomedical Engineering, Bogazici University, Turkey.

Publication

Neural Netw. 2022 Jun;150:119-136. doi: 10.1016/j.neunet.2022.03.005. Epub 2022 Mar 10.

DOI: 10.1016/j.neunet.2022.03.005
PMID: 35313245
Abstract

In the last decade, numerous supervised deep learning approaches have been proposed for visual-inertial odometry (VIO) and depth map estimation, which require large amounts of labelled data. To overcome the data limitation, self-supervised learning has emerged as a promising alternative that exploits constraints such as geometric and photometric consistency in the scene. In this study, we present a novel self-supervised deep learning-based VIO and depth map recovery approach (SelfVIO) using adversarial training and self-adaptive visual-inertial sensor fusion. SelfVIO learns the joint estimation of 6 degrees-of-freedom (6-DoF) ego-motion and a depth map of the scene from unlabelled monocular RGB image sequences and inertial measurement unit (IMU) readings. The proposed approach is able to perform VIO without requiring IMU intrinsic parameters and/or extrinsic calibration between IMU and the camera. We provide comprehensive quantitative and qualitative evaluations of the proposed framework and compare its performance with state-of-the-art VIO, VO, and visual simultaneous localization and mapping (VSLAM) approaches on the KITTI, EuRoC and Cityscapes datasets. Detailed comparisons prove that SelfVIO outperforms state-of-the-art VIO approaches in terms of pose estimation and depth recovery, making it a promising approach among existing methods in the literature.

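The abstract's core idea — learning depth and 6-DoF ego-motion from unlabelled monocular sequences by exploiting photometric consistency — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy version, not SelfVIO's implementation: the function name is invented, it uses nearest-neighbour sampling instead of the differentiable bilinear warping a trainable network needs, and it omits the adversarial training and IMU fusion the paper adds. It shows only the geometric step: back-project target pixels with predicted depth, transform them by the predicted pose, re-project into the source view, and penalize the intensity difference.

```python
import numpy as np

def photometric_loss(target, source, depth, K, T):
    """Toy photometric-consistency loss for self-supervised depth/ego-motion.

    target, source : (H, W) grayscale views of the same scene
    depth          : (H, W) predicted depth for the target view
    K              : (3, 3) camera intrinsics
    T              : (4, 4) predicted 6-DoF pose (target -> source frame)
    """
    h, w = target.shape
    K_inv = np.linalg.inv(K)

    # Homogeneous pixel grid of the target view, shape (3, H*W).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)

    # Back-project to 3D using the predicted depth.
    cam = (K_inv @ pix) * depth.reshape(1, -1)

    # Transform into the source camera frame with the predicted pose.
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src_cam = (T @ cam_h)[:3]

    # Project into the source image plane (nearest-neighbour sampling;
    # a trainable model would use differentiable bilinear sampling here).
    proj = K @ src_cam
    pu = np.round(proj[0] / proj[2]).astype(int)
    pv = np.round(proj[1] / proj[2]).astype(int)
    valid = (pu >= 0) & (pu < w) & (pv >= 0) & (pv < h) & (proj[2] > 0)

    # Mean absolute intensity error over valid re-projections.
    recon = source[pv[valid], pu[valid]]
    orig = target.reshape(-1)[valid]
    return np.abs(recon - orig).mean()
```

With a perfect depth map and the true relative pose, the warped source reconstructs the target exactly and the loss is zero; in self-supervised training this residual is the signal that drives both the depth and pose networks, with no ground-truth labels required.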

Similar Articles

1. SelfVIO: Self-supervised deep monocular Visual-Inertial Odometry and depth estimation.
Neural Netw. 2022 Jun;150:119-136. doi: 10.1016/j.neunet.2022.03.005. Epub 2022 Mar 10.
2. Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2478-2493. doi: 10.1109/TPAMI.2019.2909895. Epub 2019 Apr 15.
3. Visual-Inertial Odometry with Robust Initialization and Online Scale Estimation.
Sensors (Basel). 2018 Dec 5;18(12):4287. doi: 10.3390/s18124287.
4. Plane-Aided Visual-Inertial Odometry for 6-DOF Pose Estimation of a Robotic Navigation Aid.
IEEE Access. 2020;8:90042-90051. doi: 10.1109/access.2020.2994299. Epub 2020 May 12.
5. Adversarial Learning for Joint Optimization of Depth and Ego-Motion.
IEEE Trans Image Process. 2020 Jan 28. doi: 10.1109/TIP.2020.2968751.
6. Robust Stereo Visual-Inertial Odometry Using Nonlinear Optimization.
Sensors (Basel). 2019 Aug 29;19(17):3747. doi: 10.3390/s19173747.
7. ESVIO: Event-Based Stereo Visual-Inertial Odometry.
Sensors (Basel). 2023 Feb 10;23(4):1998. doi: 10.3390/s23041998.
8. PL-VIO: Tightly-Coupled Monocular Visual-Inertial Odometry Using Point and Line Features.
Sensors (Basel). 2018 Apr 10;18(4):1159. doi: 10.3390/s18041159.
9. Online Spatial and Temporal Calibration for Monocular Direct Visual-Inertial Odometry.
Sensors (Basel). 2019 May 16;19(10):2273. doi: 10.3390/s19102273.
10. Cycle-SfM: Joint self-supervised learning of depth and camera motion from monocular image sequences.
Chaos. 2019 Dec;29(12):123102. doi: 10.1063/1.5120605.

Cited By

1. From Pixels to Precision: A Survey of Monocular Visual Odometry in Digital Twin Applications.
Sensors (Basel). 2024 Feb 17;24(4):1274. doi: 10.3390/s24041274.
2. Pose estimation via structure-depth information from monocular endoscopy images sequence.
Biomed Opt Express. 2023 Dec 22;15(1):460-478. doi: 10.1364/BOE.498262. eCollection 2024 Jan 1.
3. Deep learning-based robust positioning for all-weather autonomous driving.
Nat Mach Intell. 2022;4(9):749-760. doi: 10.1038/s42256-022-00520-5. Epub 2022 Sep 8.
4. VIAE-Net: An End-to-End Altitude Estimation through Monocular Vision and Inertial Feature Fusion Neural Networks for UAV Autonomous Landing.
Sensors (Basel). 2021 Sep 20;21(18):6302. doi: 10.3390/s21186302.
5. Integrating Sensor Models in Deep Learning Boosts Performance: Application to Monocular Depth Estimation in Warehouse Automation.
Sensors (Basel). 2021 Feb 19;21(4):1437. doi: 10.3390/s21041437.