
3-D rigid body tracking using vision and depth sensors.

Publication Information

IEEE Trans Cybern. 2013 Oct;43(5):1395-405. doi: 10.1109/TCYB.2013.2272735. Epub 2013 Aug 15.

Abstract

In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are needed to increase overall reliability and to reduce jitter. Among the many pose estimation solutions in the literature, purely vision-based 3-D trackers require either manual initialization or an offline training stage. On the other hand, trackers relying on depth sensors alone are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on optical flow estimated from the intensity and shape index map data of the 3-D point cloud, significantly improves both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed both objectively, via error metrics, and subjectively, in the rendered scenes.
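
The abstract couples two ingredients: a shape index map computed over the 3-D point cloud, used together with image intensity to estimate optical flow, and an extended Kalman filter that fuses the resulting 2-D/3-D measurements into a pose estimate. The sketch below is not the authors' implementation; it only illustrates, under common conventions, how a per-pixel shape index map could be derived from a dense depth image so that flow can be tracked on intensity and shape index channels jointly. The function name `shape_index_map`, the Monge-patch curvature approximation, and the [0, 1] shape index convention are assumptions, not details taken from the paper.

```python
import numpy as np

def shape_index_map(depth, eps=1e-8):
    """Per-pixel shape index from a dense depth map (rows x cols, metric units)."""
    # First- and second-order depth derivatives (axis 0 = rows/y, axis 1 = cols/x).
    fy, fx = np.gradient(depth)
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)

    # Mean (H) and Gaussian (K) curvature of the Monge patch z = f(x, y).
    denom = 1.0 + fx**2 + fy**2
    H = ((1.0 + fx**2) * fyy - 2.0 * fx * fy * fxy + (1.0 + fy**2) * fxx) / (2.0 * denom**1.5)
    K = (fxx * fyy - fxy**2) / denom**2

    # Principal curvatures k1 >= k2, then shape index mapped into [0, 1].
    root = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + root, H - root
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)
```

Point correspondences tracked on these intensity and shape index channels (for example with a pyramidal Lucas-Kanade tracker) would then serve as the measurements for the EKF pose update described in the abstract.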

