
Event-based feature tracking in a visual inertial odometry framework.

Authors

Ribeiro-Gomes José, Gaspar José, Bernardino Alexandre

Affiliations

Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal.

Publication

Front Robot AI. 2023 Feb 14;10:994488. doi: 10.3389/frobt.2023.994488. eCollection 2023.

Abstract

Event cameras report pixel-wise brightness changes at high temporal resolution, allowing for high-speed tracking of features in visual inertial odometry (VIO) estimation, but they require a paradigm shift, as common practices from past decades of conventional cameras, such as feature detection and tracking, do not translate directly. One method for feature detection and tracking is the Event-based Kanade-Lucas-Tomasi tracker (EKLT), a hybrid approach that combines frames with events to provide high-speed tracking of features. Despite the high temporal resolution of the events, the local nature of feature registration imposes conservative limits on the camera motion speed. Our proposed approach expands on EKLT by running the event-based feature tracker concurrently with a visual inertial odometry system performing pose estimation, leveraging frames, events and Inertial Measurement Unit (IMU) information to improve tracking. The problem of temporally combining high-rate IMU information with asynchronous event cameras is solved by means of an asynchronous probabilistic filter, in particular an Unscented Kalman Filter (UKF). The proposed EKLT-based feature tracking method takes into account the state estimate of the pose estimator running in parallel and provides this information to the feature tracker, resulting in a synergy that can improve not only the feature tracking, but also the pose estimation. This approach can be seen as a feedback loop, where the state estimate of the filter is fed back into the tracker, which in turn produces visual information for the filter, creating a "closed loop". The method is tested on rotational motions only, and comparisons between a conventional (not event-based) approach and the proposed approach are made, using synthetic and real datasets. Results support that the use of events for the task improves performance.
To the best of our knowledge, this is the first work proposing the fusion of visual with inertial information using event cameras by means of a UKF, as well as the use of EKLT in the context of pose estimation. Furthermore, our closed-loop approach proved to be an improvement over the base EKLT, resulting in better feature tracking and pose estimation. The inertial information, despite being prone to drift over time, allows keeping track of features that would otherwise be lost. In turn, feature tracking synergistically helps estimate and minimize the drift.
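The core mechanism described above is an asynchronous UKF that propagates the state with each IMU sample and corrects it whenever an event-based feature measurement arrives, regardless of timing. The following is a minimal sketch of that idea under strong simplifying assumptions: a one-dimensional rotation-only state (the paper's filter is multi-dimensional), an identity measurement model, and illustrative noise parameters. All names (`RotationUKF`, `sigma_points`, the `q_gyro`/`r_feat` values) are hypothetical and not taken from the paper.

```python
import numpy as np

def sigma_points(mean, cov, kappa=2.0):
    """Generate the 2n+1 unscented sigma points and their weights."""
    n = mean.shape[0]
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

class RotationUKF:
    """Illustrative 1-D rotational UKF: gyro-driven predict, event-feature update."""
    def __init__(self, q_gyro=1e-4, r_feat=1e-3):
        self.x = np.zeros(1)        # state: rotation angle (rad)
        self.P = np.eye(1) * 1e-2   # state covariance
        self.q = q_gyro             # gyro process noise density (assumed value)
        self.r = r_feat             # feature measurement noise (assumed value)
        self.t = 0.0                # filter time (s); updated asynchronously

    def predict(self, omega, t_new):
        """Propagate to t_new using gyro rate omega (rad/s); dt is arbitrary."""
        dt = t_new - self.t
        pts, w = sigma_points(self.x, self.P)
        pts = pts + omega * dt      # process model: angle += omega * dt
        self.x = (w[:, None] * pts).sum(axis=0)
        d = pts - self.x
        self.P = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) + self.q * dt
        self.t = t_new

    def update(self, z):
        """Fuse an event-based feature angle measurement z; here h(x) = x."""
        pts, w = sigma_points(self.x, self.P)
        zhat = (w[:, None] * pts).sum(axis=0)
        dz = pts - zhat
        dx = pts - self.x
        Pzz = (w[:, None, None] * dz[:, :, None] * dz[:, None, :]).sum(axis=0) + self.r
        Pxz = (w[:, None, None] * dx[:, :, None] * dz[:, None, :]).sum(axis=0)
        K = Pxz @ np.linalg.inv(Pzz)        # Kalman gain
        self.x = self.x + K @ (z - zhat)
        self.P = self.P - K @ Pzz @ K.T

ukf = RotationUKF()
ukf.predict(omega=0.5, t_new=0.01)   # IMU sample arrives at t = 10 ms
ukf.update(np.array([0.005]))        # event-feature measurement arrives later, asynchronously
```

In the closed-loop scheme the abstract describes, the filtered state after `update` would also be fed back to the tracker to guide its search for features, which is what keeps features alive through fast motion while the visual corrections bound the IMU drift.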


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/017e/9971716/6f407c805422/frobt-10-994488-g001.jpg
