Liu Jing, Li Chunpeng, Fan Xuefeng, Wang Zhaoqi
The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing 100190, China.
University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China.
Sensors (Basel). 2015 Aug 21;15(8):20894-924. doi: 10.3390/s150820894.
Depth estimation is a classical problem in computer vision that typically relies on either a depth sensor or stereo matching alone. A depth sensor provides real-time estimates in the repetitive and textureless regions where stereo matching is ineffective, whereas stereo matching obtains more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching with the depth sensor, exploiting their complementary characteristics to improve depth estimation. Texture information is incorporated as a constraint to restrict the range of potential disparities at each pixel and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model represents the relationships among disparities across different pixels and segments; by treating the depth-sensor measurements as prior knowledge, it is more robust to luminance variation. Segmentation is treated as a soft constraint to reduce ambiguities caused by under- or over-segmentation. On the Middlebury datasets, our method achieves an average error rate of 2.61%, compared with 3.27% for previous state-of-the-art methods, i.e., nearly 20% more accurate than other fusion-based algorithms.