Suppr 超能文献


Online Spatial and Temporal Calibration for Monocular Direct Visual-Inertial Odometry

Author Information

Feng Zheyu, Li Jianwen, Zhang Lundong, Chen Chen

Affiliation

Information Engineering University, Zhengzhou 450001, China.

Publication Information

Sensors (Basel). 2019 May 16;19(10):2273. doi: 10.3390/s19102273.

DOI: 10.3390/s19102273
PMID: 31100933
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6567321/
Abstract

Owing to the nonlinearity in visual-inertial state estimation, sufficiently accurate initial states, especially the spatial and temporal parameters between the IMU (Inertial Measurement Unit) and the camera, must be provided to avoid divergence. Moreover, these parameters need to be calibrated online, since they are likely to vary once the mechanical configuration changes even slightly. Recently, direct approaches have gained popularity because, by tracking pixels directly, they outperform feature-based approaches in low-texture or low-illumination environments. Based on these considerations, we implement a direct version of monocular VIO (Visual-Inertial Odometry) and propose a novel approach to initialize the spatial-temporal parameters and estimate them jointly with all other variables of interest (IMU pose, point inverse depth, etc.). We highlight that our approach performs robust and accurate initialization and online calibration of the spatial and temporal parameters without any prior information, and achieves high-precision estimates even when a large temporal offset occurs. The performance of the proposed approach was verified on a public UAV (Unmanned Aerial Vehicle) dataset.

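The temporal-calibration idea in the abstract — the camera and IMU clocks disagree by an offset t_d, so IMU measurements must be re-sampled at offset-corrected frame timestamps — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the scalar angular-velocity model, and the linear interpolation are all assumptions for illustration.

```python
def interpolate_imu(imu_samples, t):
    """Linearly interpolate the IMU angular velocity at time t.

    imu_samples: list of (timestamp, angular_velocity) sorted by timestamp.
    A scalar rate is used here for simplicity; a real system would
    interpolate a 3-vector (and handle orientation on SO(3)).
    """
    for (t0, w0), (t1, w1) in zip(imu_samples, imu_samples[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return w0 + alpha * (w1 - w0)
    raise ValueError("t lies outside the IMU sample range")

def align_camera_to_imu(camera_times, imu_samples, t_d):
    """Map camera frame times onto the IMU clock with offset t_d.

    Assumes the convention t_imu = t_cam + t_d, so each frame's
    inertial measurement is read at the shifted timestamp. In an
    online calibrator, t_d would itself be a state estimated jointly
    with pose and inverse depth rather than a fixed input.
    """
    return [interpolate_imu(imu_samples, t + t_d) for t in camera_times]
```

For example, with IMU samples at t = 0.0 and t = 1.0 and an estimated offset t_d = 0.1, a frame stamped at t = 0.4 is associated with the IMU rate interpolated at t = 0.5. The paper estimates t_d inside the sliding-window optimizer; the fixed-offset resampling above only shows the measurement-association step that such an estimate enables.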

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1649/6567321/ba42082039b2/sensors-19-02273-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1649/6567321/ebfd37702f9c/sensors-19-02273-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1649/6567321/6fd2ee4bdf49/sensors-19-02273-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1649/6567321/95b7dd4f5ca6/sensors-19-02273-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1649/6567321/d5cb09d49500/sensors-19-02273-g005.jpg

Similar Articles

1
Online Spatial and Temporal Calibration for Monocular Direct Visual-Inertial Odometry.
Sensors (Basel). 2019 May 16;19(10):2273. doi: 10.3390/s19102273.
2
Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints.
Sensors (Basel). 2021 Apr 10;21(8):2673. doi: 10.3390/s21082673.
3
Visual-Inertial Odometry with Robust Initialization and Online Scale Estimation.
Sensors (Basel). 2018 Dec 5;18(12):4287. doi: 10.3390/s18124287.
4
Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2478-2493. doi: 10.1109/TPAMI.2019.2909895. Epub 2019 Apr 15.
5
Online IMU Self-Calibration for Visual-Inertial Systems.
Sensors (Basel). 2019 Apr 4;19(7):1624. doi: 10.3390/s19071624.
6
Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight.
Sensors (Basel). 2020 Apr 14;20(8):2209. doi: 10.3390/s20082209.
7
SelfVIO: Self-supervised deep monocular Visual-Inertial Odometry and depth estimation.
Neural Netw. 2022 Jun;150:119-136. doi: 10.1016/j.neunet.2022.03.005. Epub 2022 Mar 10.
8
VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation.
Sensors (Basel). 2018 Nov 19;18(11):4036. doi: 10.3390/s18114036.
9
Plane-Aided Visual-Inertial Odometry for 6-DOF Pose Estimation of a Robotic Navigation Aid.
IEEE Access. 2020;8:90042-90051. doi: 10.1109/access.2020.2994299. Epub 2020 May 12.
10
Robust Stereo Visual-Inertial Odometry Using Nonlinear Optimization.
Sensors (Basel). 2019 Aug 29;19(17):3747. doi: 10.3390/s19173747.

Cited By

1
Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar-IMU System.
Sensors (Basel). 2023 Mar 14;23(6):3119. doi: 10.3390/s23063119.
2
Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints.
Sensors (Basel). 2021 Apr 10;21(8):2673. doi: 10.3390/s21082673.

References

1
Direct Sparse Odometry.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):611-625. doi: 10.1109/TPAMI.2017.2658577. Epub 2017 Apr 12.