Lin Huei-Yung, Wang Min-Liang
Department of Electrical Engineering and Advanced Institute of Manufacturing with High-Tech Innovation, National Chung Cheng University, 168 University Road, Min-Hsiung, Chiayi 621, Taiwan.
Sensors (Basel). 2014 Sep 4;14(9):16508-31. doi: 10.3390/s140916508.
In this paper, we present a framework for a hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair according to the perspective projection model. The proposed method not only simplifies the computation of the epipolar geometry for the hybrid imaging system, but also facilitates stereo matching between the heterogeneous image formations. Experimental results on both synthetic data and real-scene images demonstrate the feasibility of our approach.
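The abstract does not give implementation details of the rectification step, so the following is a minimal illustrative sketch, not the authors' implementation. It shows the basic operation behind rectifying through virtual cameras: resampling an omnidirectional image onto a virtual perspective (pinhole) view. The unified sphere (catadioptric) model with mirror parameter xi, the intrinsics K_omni and K_virt, and the virtual-camera rotation R_virt are all illustrative assumptions.

```python
# Sketch: render a virtual perspective view from an omnidirectional image.
# Assumptions (not from the paper): unified sphere model with parameter xi,
# omni intrinsics K_omni, virtual pinhole intrinsics K_virt, rotation R_virt.
import numpy as np

def rectify_to_virtual_perspective(omni_img, K_omni, xi, K_virt, R_virt, out_size):
    """Sample the omni image along each ray of a virtual pinhole camera."""
    h, w = out_size
    out = np.zeros((h, w) + omni_img.shape[2:], dtype=omni_img.dtype)
    K_virt_inv = np.linalg.inv(K_virt)
    for v in range(h):
        for u in range(w):
            # Viewing ray of the virtual perspective camera, rotated into
            # the omnidirectional camera frame.
            ray = R_virt @ (K_virt_inv @ np.array([u, v, 1.0]))
            ray = ray / np.linalg.norm(ray)
            # Forward projection with the unified sphere model:
            # m = (x / (z + xi), y / (z + xi), 1) for a unit ray (x, y, z).
            denom = ray[2] + xi
            if denom <= 1e-9:  # ray not visible to the omni camera
                continue
            p = K_omni @ np.array([ray[0] / denom, ray[1] / denom, 1.0])
            ui, vi = int(round(p[0])), int(round(p[1]))
            if 0 <= vi < omni_img.shape[0] and 0 <= ui < omni_img.shape[1]:
                out[v, u] = omni_img[vi, ui]
    return out
```

Once the omnidirectional view has been expressed as an image of such a virtual perspective camera, the hybrid pair can be treated as two perspective images, so standard epipolar rectification and stereo matching apply, which is consistent with the simplification the abstract describes.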