Chahl J S, Srinivasan M V
Centre for Visual Sciences, Research School of Biological Sciences, Australian National University, Canberra, Australia.
Biol Cybern. 1996 May;74(5):405-11. doi: 10.1007/BF00206707.
A novel technique is presented for the computation of the parameters of ego-motion of a mobile device, such as a robot or a mechanical arm, equipped with two visual sensors. Each sensor captures a panoramic view of the environment. We show that the parameters of ego-motion can be computed by interpolating the position of the image captured by one of the sensors at the robot's present location with respect to the images captured by the two sensors at the robot's previous location. The algorithm delivers the distance travelled and the angle rotated without explicit measurement or integration of velocity fields. The result is obtained in a single step, without iteration or successive approximation. Tests of the algorithm on real and synthetic images show accuracy to within 5% of the actual motion. Implementation on a mobile robot shows that stepwise rotation and translation can be measured to within 10% accuracy in a three-dimensional world of unknown structure. The position and orientation of the robot at the end of a 30-step trajectory can be estimated to within 5% and 5 degrees, respectively.
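To make the interpolation idea concrete, the following is a minimal sketch of the generic image-interpolation technique the abstract describes, not the authors' two-sensor implementation; the 1-D scan, the function name interpolate_motion, and the NumPy least-squares formulation are illustrative assumptions. A view at the new pose is approximated as a linear combination of reference views taken at known displacements, and the combination coefficients, obtained in one linear solve, yield the motion estimate directly, with no iteration.

    import numpy as np

    def interpolate_motion(f_prev, f_refs, deltas, f_curr):
        """Single-step motion estimate by image interpolation.

        f_prev : 1-D intensity scan at the previous pose.
        f_refs : reference scans displaced from f_prev by the known
                 amounts in `deltas` (e.g. degrees of rotation).
        f_curr : scan at the unknown new pose.

        Approximates f_curr ~= f_prev + sum_i a_i*(f_refs[i] - f_prev)
        and returns the displacement estimate sum_i a_i*deltas[i].
        """
        A = np.stack([fr - f_prev for fr in f_refs], axis=1)  # difference-image basis
        b = f_curr - f_prev
        a, *_ = np.linalg.lstsq(A, b, rcond=None)  # one least-squares solve, no iteration
        return float(np.dot(a, np.asarray(deltas)))

    # Toy check: a smooth 360-sample "panoramic" scan, 1 sample = 1 degree.
    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    scene = np.sin(3 * theta) + 0.5 * np.cos(7 * theta)
    rotate = lambda deg: np.roll(scene, deg)
    est = interpolate_motion(scene, [rotate(2), rotate(-2)], [2.0, -2.0], rotate(1))
    print(est)  # close to the true rotation of 1 degree

In this toy check the estimate comes out near the true 1-degree rotation. The linear approximation holds only for displacements small relative to the image structure, which is consistent with the paper's stepwise scheme of comparing each new view against reference views captured at the immediately preceding location.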