Mann Richard, Langer Michael S
School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
J Opt Soc Am A Opt Image Sci Vis. 2005 Sep;22(9):1717-31. doi: 10.1364/josaa.22.001717.
Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.
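As a hedged illustration of the abstract's final claim, and not taken from the paper itself, the one-dimensional constraint follows from the standard instantaneous motion-field equation for a rigid scene (Longuet-Higgins and Prazdny formulation; the paper's own notation may differ). With focal length normalized to 1, translation T, rotation Omega, and depth Z(x,y), the image velocity at a point (x,y) is
\[
\mathbf{v}(x,y) \;=\; \frac{1}{Z(x,y)}\,A(x,y)\,\mathbf{T} \;+\; B(x,y)\,\boldsymbol{\Omega},
\qquad
A(x,y) = \begin{pmatrix} -1 & 0 & x \\ 0 & -1 & y \end{pmatrix},
\quad
B(x,y) = \begin{pmatrix} xy & -(1+x^{2}) & y \\ 1+y^{2} & -xy & -x \end{pmatrix}.
\]
Once the egomotion parameters T and Omega are known, the only remaining unknown at each pixel is the inverse depth 1/Z, so the candidate velocities at (x,y) form the one-parameter family \(\mathbf{v} = B\boldsymbol{\Omega} + (1/Z)\,A\mathbf{T}\): a line through \(B\boldsymbol{\Omega}\) in the direction of the local motion parallax \(A(x,y)\,\mathbf{T}\), rather than an unconstrained two-dimensional velocity.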