IEEE Trans Pattern Anal Mach Intell. 2011 Jul;33(7):1400-14. doi: 10.1109/TPAMI.2010.172. Epub 2010 Sep 9.
Time-of-flight range sensors have error characteristics that are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. Conversely, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We explore their complementary characteristics and introduce a method for combining the results from both modalities that achieves better accuracy than either alone. In our fusion framework, the depth probability distribution functions from each sensor modality are formulated and optimized. Robust, adaptive fusion is built on a pixel-wise reliability weighting function calculated for each method. In addition, since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated; we introduce a method that substantially improves upon the manufacturer's calibration. An extensive set of experiments demonstrates that the proposed techniques improve both accuracy and robustness.
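The core fusion idea can be illustrated with a minimal sketch. Assuming each modality yields a per-pixel Gaussian depth likelihood and a reliability weight in [0, 1] (the names `fuse_depth_maps`, `w_tof`, and `w_stereo` are illustrative, not from the paper; the authors' actual probability distributions and optimization are more involved), reliability-weighted fusion of two depth maps reduces to a precision-weighted average:

```python
import numpy as np

def fuse_depth_maps(z_tof, var_tof, z_stereo, var_stereo, w_tof, w_stereo):
    """Fuse two depth maps by reliability-weighted precision averaging.

    Each modality is modeled as a per-pixel Gaussian depth likelihood
    N(z, var). The weights w_* in [0, 1] down-weight pixels where a
    modality is unreliable (e.g. stereo on textureless walls, ToF on
    low-reflectance surfaces). All arrays share the same HxW shape.
    """
    # Reliability-scaled precision (inverse variance) for each sensor.
    prec_tof = w_tof / var_tof
    prec_stereo = w_stereo / var_stereo

    # Precision-weighted mean: the MAP depth under the product of the
    # two weighted Gaussian likelihoods.
    total_prec = prec_tof + prec_stereo
    return (prec_tof * z_tof + prec_stereo * z_stereo) / total_prec

# Toy usage: a flat scene at 2.0 m seen by a noisy ToF sensor and a
# stereo matcher we pretend fails (low reliability) on the left half.
h, w = 4, 4
z_tof = 2.0 + 0.05 * np.random.randn(h, w)
z_stereo = 2.0 + 0.01 * np.random.randn(h, w)
var_tof = np.full((h, w), 0.05 ** 2)
var_stereo = np.full((h, w), 0.01 ** 2)
w_tof = np.ones((h, w))
w_stereo = np.ones((h, w))
w_stereo[:, : w // 2] = 0.1  # textureless region: distrust stereo
print(fuse_depth_maps(z_tof, var_tof, z_stereo, var_stereo, w_tof, w_stereo))
```

In this toy example, the fused estimate follows stereo where it is trusted and falls back toward the ToF measurement where the stereo weight is low, which is the qualitative behavior the abstract describes.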