Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Tonantzintla 72840, Mexico.
Institut Pascal, Université Clermont Auvergne (UCA), 63178 Clermont-Ferrand, France.
Sensors (Basel). 2018 Dec 23;19(1):53. doi: 10.3390/s19010053.
Applications such as autonomous navigation, robot vision, and autonomous flying require depth-map information of a scene. Depth can be estimated with a single moving camera (depth from motion). However, traditional depth-from-motion algorithms have low processing speeds and high hardware requirements that limit their embedded capabilities. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem. We propose a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD, we propose computing the curl of the intensity gradient as a preprocessing step. Experimental results demonstrate that higher accuracy (90%) can be reached compared with previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, with a processing speed up to 128 times faster than that of previous work, making it possible to achieve high performance in the context of embedded applications.
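To illustrate the core idea of the SAD-based matching described above, the following is a minimal software sketch of window-parallel SAD block matching between two frames. This is an assumption-laden illustration, not the paper's FPGA pipeline: the function name, window size, and search range are hypothetical, the search is done sequentially in software (the hardware evaluates windows in parallel), and the curl-of-gradient preprocessing step is omitted.

```python
import numpy as np

def sad_optical_flow(prev, curr, win=3, search=2):
    """Per-pixel flow by SAD block matching (illustrative sketch only).

    For each pixel, a (win x win) reference window in the previous frame
    is compared against candidate windows in the current frame over a
    (2*search+1)^2 neighborhood; the displacement with the minimum sum
    of absolute differences (SAD) is taken as the flow vector.
    """
    h, w = prev.shape
    r = win // 2
    pad = r + search  # enough padding for window + search range
    prev_p = np.pad(prev.astype(np.int32), pad, mode="edge")
    curr_p = np.pad(curr.astype(np.int32), pad, mode="edge")
    flow = np.zeros((h, w, 2), dtype=np.int32)  # (dx, dy) per pixel

    for y in range(h):
        for x in range(w):
            # Reference window centered on (y, x) in the previous frame.
            ref = prev_p[y + search:y + search + win,
                         x + search:x + search + win]
            best_sad, best_uv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr_p[y + search + dy:y + search + dy + win,
                                  x + search + dx:x + search + dx + win]
                    sad = int(np.abs(ref - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_uv = sad, (dx, dy)
            flow[y, x] = best_uv
    return flow
```

Each (pixel, displacement) pair in the two inner loops is independent, which is what makes the pixel-parallel/window-parallel hardware mapping possible: in an FPGA, all candidate windows for a pixel can be scored concurrently and reduced with a comparator tree.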