Oliveira Miguel, Lim Gi-Hyun, Madeira Tiago, Dias Paulo, Santos Vítor
Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal.
Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.
Sensors (Basel). 2021 May 7;21(9):3248. doi: 10.3390/s21093248.
The creation of a textured 3D mesh from a set of RGB-D images often yields unappealing visual artifacts. The main cause is misalignment between the RGB-D images due to inaccurate camera pose estimations. While many works focus on improving those estimates, this remains a cumbersome problem, in particular because pose estimation errors accumulate. In this work, we conjecture that camera pose estimation methodologies will always display non-negligible errors; hence the need for more robust texture mapping methodologies, capable of producing quality textures even in scenarios with considerable camera misalignment. To this end, we argue that the depth data from RGB-D images can be an invaluable aid in conferring such robustness on the texture mapping process. Results show that the complete texture mapping procedure proposed in this paper significantly improves the quality of the produced textured 3D meshes.