Ieng Sio-Hoi, Carneiro João, Benosman Ryad B
Institut National de la Santé et de la Recherche Médicale, UMR S 968; Sorbonne Université, University of Pierre and Marie Curie, Univ Paris 06, UMR S 968; Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France.
Front Neurosci. 2017 Feb 6;10:596. doi: 10.3389/fnins.2016.00596. eCollection 2016.
State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image plane, using luminance (sampled at the frame rate of the cameras) as the principal source of information. In this paper we introduce a purely time-based approach that estimates the flow from 3D point clouds, primarily those output by neuromorphic event-based stereo camera rigs, but also those from any existing 3D depth sensor, even one that neither provides nor uses luminance. The method formulates the scene flow problem as a local piecewise regularization of the flow. This formulation provides a unifying framework for estimating scene flow from both synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time, using a decomposition into its subspaces, and naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point-cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.
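The core idea of local piecewise regularization can be illustrated with a minimal sketch (not the paper's implementation): treat a time-stamped 3D point cloud as samples in 4D space-time, and at each query point fit a local linear motion model p(t) ≈ p0 + v·t to the spatial neighborhood by least squares; the fitted slope v is the local scene-flow estimate. The function name, neighborhood radius, and fitting strategy below are illustrative assumptions.

```python
import numpy as np

def local_scene_flow(points, times, query_idx, radius=1.0, min_neighbors=8):
    """Estimate the 3D velocity at one point of a time-stamped point cloud.

    Sketch of a local piecewise-linear regularization: gather the spatial
    neighbors of the query point and fit p(t) ≈ p0 + v * t by least squares.
    `points` is (N, 3), `times` is (N,); returns a 3-vector v or None if the
    neighborhood is too sparse.
    """
    p = points[query_idx]
    dists = np.linalg.norm(points - p, axis=1)
    mask = dists < radius
    if mask.sum() < min_neighbors:
        return None  # not enough support for a reliable local fit
    P, T = points[mask], times[mask]
    # Design matrix [t, 1] per neighbor; the first row of the solution
    # is the slope v (velocity), the second is the intercept p0.
    A = np.stack([T, np.ones_like(T)], axis=1)
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)
    return coef[0]
```

On a rigidly translating neighborhood this recovers the true velocity up to noise; piecewise application over many local neighborhoods is what lets such a scheme follow deformable objects, since each patch carries its own motion model.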