IEEE Trans Pattern Anal Mach Intell. 2016 Oct;38(10):2024-39. doi: 10.1109/TPAMI.2015.2505283. Epub 2015 Dec 3.
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images, such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, mounting evidence has shown that features from deep convolutional neural networks (CNNs) set new records for various vision applications. On the other hand, given the continuous nature of depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. We therefore present a deep convolutional neural field model for estimating depth from single monocular images, aiming to jointly exploit the capacity of deep CNNs and continuous CRFs. In particular, we propose a deep structured learning scheme that learns the unary and pairwise potentials of the continuous CRF in a unified deep CNN framework. We then propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which speeds up the patch-wise convolutions in the deep model by roughly a factor of 10. With this more efficient model, we are able to design deeper networks to pursue better performance. Our method can be applied to depth estimation of general scenes without geometric priors or any extra side information. In our case, the integral of the partition function can be calculated in closed form, so the log-likelihood maximization can be solved exactly. Moreover, inference for predicting the depths of a test image is highly efficient, as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
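The closed-form inference the abstract refers to can be sketched as follows. In a continuous CRF of this kind, the energy combines unary terms (y_p - z_p)^2, penalizing deviation from the CNN's per-superpixel depth predictions z, with pairwise smoothing terms (1/2) R_pq (y_p - y_q)^2 weighted by learned similarities R. Because the energy is quadratic in y, the MAP depths are y* = A^{-1} z with A = I + D - R, where D is the diagonal matrix of row sums of R. A minimal NumPy sketch, assuming this quadratic energy form (variable names are illustrative; in the paper's setting z and R would come from the learned unary and pairwise networks):

```python
import numpy as np

def crf_depth_inference(z, R):
    """Closed-form MAP inference for a continuous CRF over superpixel depths.

    z : (n,) array of unary depth predictions, one per superpixel
        (e.g. produced by a CNN regression head).
    R : (n, n) symmetric, nonnegative pairwise similarity matrix
        (zero for superpixel pairs that are not neighbours).

    With energy E(y) = sum_p (y_p - z_p)^2 + 0.5 * sum_{p,q} R_pq (y_p - y_q)^2,
    E is quadratic in y, so the minimizer is y* = A^{-1} z where
    A = I + D - R and D = diag(row sums of R).
    """
    D = np.diag(R.sum(axis=1))          # degree matrix of the similarity graph
    A = np.eye(len(z)) + D - R          # I + graph Laplacian (D - R)
    return np.linalg.solve(A, z)        # solve A y = z instead of inverting A

# Illustrative usage: two neighbouring superpixels with unary depths 1 and 3.
z = np.array([1.0, 3.0])
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])
y = crf_depth_inference(z, R)           # smoothing pulls the two depths together
```

Note that D - R is the graph Laplacian of the similarity graph, so A = I + L is symmetric positive definite and the linear system always has a unique solution; with zero pairwise weights the inference reduces to the unary predictions themselves.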