Department of Computer Engineering and Automation, Federal University of Rio Grande do Norte, Natal, RN 59.078-970, Brazil.
Department of Computer Science, State University of Rio Grande do Norte, Natal, RN 59104-200, Brazil.
Sensors (Basel). 2018 Jul 16;18(7):2302. doi: 10.3390/s18072302.
Technological innovations in RGB-D sensor hardware have made real-time acquisition of 3D point clouds possible. Consequently, a variety of applications related to the 3D world has emerged and is receiving increasing attention from researchers. Nevertheless, one of the main remaining problems is the demand for computationally intensive processing, which requires optimized approaches to 3D vision modeling, especially when tasks must be performed in real time. A previously proposed multi-resolution 3D model known as foveated point clouds is a possible solution to this problem; however, that model is limited to a single foveated structure with context-dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Applying several foveated structures, however, considerably increases processing, since regions where distinct structures intersect are processed multiple times. To solve this problem, the current proposal introduces an approach that avoids processing redundant regions, further reducing processing time. This approach can be used to identify objects in 3D point clouds, one of the key tasks in real-time applications such as robot vision, with efficient synchronization allowing validation of the model and verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time while retaining the main features of the original point cloud and maintaining the recognition quality rate in comparison with state-of-the-art 3D object recognition methods.
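The multifoveation idea described above — several foveated regions kept at fine resolution, the periphery downsampled coarsely, and overlapping regions processed only once — can be illustrated roughly as follows. This is a hypothetical sketch using per-level voxel-grid downsampling; the function name, parameters, and the nearest-fovea assignment rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def multifoveate(points, foveae, radii, voxel_sizes):
    """Downsample a point cloud with several foveated structures.

    Points within radii[k] of the nearest fovea are kept at voxel
    size voxel_sizes[k] (levels ordered finest to coarsest). Each
    point is claimed by the finest level that covers it, so regions
    where foveae overlap are processed exactly once.
    Hypothetical sketch, not the paper's code.
    """
    points = np.asarray(points, dtype=float)
    foveae = np.asarray(foveae, dtype=float)
    # Distance from each point to its nearest fovea center.
    dists = np.min(
        np.linalg.norm(points[:, None, :] - foveae[None, :, :], axis=2),
        axis=1,
    )
    processed = np.zeros(len(points), dtype=bool)
    out = []
    # Finest level first: once a point is claimed it is never revisited,
    # which is what avoids reprocessing intersections of structures.
    for r, v in zip(radii, voxel_sizes):
        mask = (dists <= r) & ~processed
        processed |= mask
        if not mask.any():
            continue
        # Voxel-grid downsampling: keep one point per occupied voxel.
        keys = np.floor(points[mask] / v).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        out.append(points[mask][idx])
    return np.concatenate(out) if out else np.empty((0, 3))

# Example: two foveae in a random cloud; the periphery (radius = inf)
# is downsampled four times more coarsely than the foveated regions.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (1000, 3))
reduced = multifoveate(
    pts,
    foveae=[[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]],
    radii=[0.3, np.inf],
    voxel_sizes=[0.05, 0.2],
)
```

The data reduction comes from the coarse peripheral level, while the fine levels preserve detail near the foveae, mirroring the trade-off the abstract describes.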