Zhang Chen, Hu Yu
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
Sensors (Basel). 2017 Oct 1;17(10):2260. doi: 10.3390/s17102260.
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we maintain drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with that of other state-of-the-art algorithms. We release both our dataset and code as open source (https://github.com/zhangxaochen/CuFusion) so that other researchers can reproduce and verify our results.
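For context, contribution (b) replaces the conventional running weighted-average TSDF update used in KinectFusion-style pipelines. Below is a minimal C++ sketch of that baseline update only, not of the proposed prediction-corrected strategy; the type and function names (Voxel, fuse_sample, MAX_WEIGHT) are illustrative assumptions and are not taken from the CuFusion code.

```cpp
#include <algorithm>

struct Voxel {
    float tsdf   = 0.0f;  // truncated signed distance to the surface
    float weight = 0.0f;  // accumulated observation confidence
};

constexpr float MAX_WEIGHT = 128.0f;  // cap so old data can still be revised

// Fold one new depth observation into a voxel with a simple moving average.
// Because every sample is blended uniformly, sharp edges and high-curvature
// geometry get smoothed out, which is the limitation the paper's
// prediction-corrected fusion is designed to address.
void fuse_sample(Voxel& v, float tsdf_obs, float w_obs = 1.0f) {
    v.tsdf   = (v.tsdf * v.weight + tsdf_obs * w_obs) / (v.weight + w_obs);
    v.weight = std::min(v.weight + w_obs, MAX_WEIGHT);
}
```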