Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing.

Publication Information

IEEE Trans Image Process. 2014 Aug;23(8):3428-42. doi: 10.1109/TIP.2014.2329389. Epub 2014 Jun 5.

Abstract

Estimating dense correspondence or depth information from a pair of stereoscopic images is a fundamental problem in computer vision with a range of important applications. Despite intensive past research on this topic, it remains challenging to recover depth information both reliably and efficiently, especially when the input images contain weakly textured regions or are captured under uncontrolled, real-life conditions. Striking a desirable balance between computational efficiency and estimation quality, a hybrid minimum spanning tree-based stereo matching method is proposed in this paper. Our method performs efficient nonlocal cost aggregation at the pixel level and the region level, and then adaptively fuses the resulting costs to leverage their respective strengths in handling large textureless regions and fine depth discontinuities. Experiments on the standard Middlebury stereo benchmark show that the proposed method outperforms all prior local and nonlocal aggregation-based methods, with particularly noticeable improvements in low-texture regions. To further demonstrate the effectiveness of the proposed stereo method, and motivated by the growing demand for expressive depth-induced photo effects, this paper next addresses the emerging application of interactive depth-of-field rendering from a real-world stereo image pair. To this end, we propose an accurate thin-lens model for synthetic depth-of-field rendering that accounts for user-stroke placement and camera-specific parameters and performs pixel-adapted Gaussian blurring in a principled way. Taking ~1.5 s to process a pair of 640×360 images in the off-line step, our system, named Scribble2focus, allows users to interactively select in-focus regions with simple strokes on a touch screen and instantly returns the synthetically refocused images.
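The pixel-level half of the hybrid method builds on minimum spanning tree (MST) cost aggregation, in which every pixel supports every other pixel's matching cost with a weight that decays with their distance along the tree; two sweeps over the tree (leaves to root, then root to leaves) make the full nonlocal aggregation linear in the number of pixels. Below is a minimal Python sketch of that classic two-pass scheme (after Yang's nonlocal aggregation, which the paper extends); the function name mst_aggregate, the 4-connected intensity-difference graph, and the parameter sigma are illustrative assumptions, not the paper's exact implementation, which additionally aggregates at the region level and adaptively fuses the two results.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_aggregate(cost, guide, sigma=0.1):
    """Two-pass nonlocal cost aggregation over an MST of the guide image.
    cost  : (H, W, D) raw matching-cost volume
    guide : (H, W) grayscale guide image in [0, 1]
    (A sketch: plain Python loops, not an optimized implementation.)"""
    H, W, D = cost.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    g = guide.ravel().astype(np.float64)

    # 4-connected grid graph; edge weight = absolute intensity difference
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wts = np.abs(g[rows] - g[cols]) + 1e-6   # keep weights positive so no edge is dropped
    graph = csr_matrix((wts, (rows, cols)), shape=(n, n))

    # MST of the grid graph, then a BFS ordering rooted at pixel 0
    mst = minimum_spanning_tree(graph)
    mst = (mst + mst.T).tocsr()              # symmetrize for undirected traversal
    order, parent = breadth_first_order(mst, 0, directed=False)

    # similarity between each node and its parent: S = exp(-w / sigma)
    sim = np.ones(n)
    for v in order[1:]:
        sim[v] = np.exp(-mst[v, parent[v]] / sigma)

    agg = cost.reshape(n, D).astype(np.float64)
    # pass 1 (leaves -> root): each child adds S * its aggregated cost to its parent
    for v in order[::-1]:
        p = parent[v]
        if p >= 0:
            agg[p] += sim[v] * agg[v]
    # pass 2 (root -> leaves): C(v) = S * C(parent) + (1 - S^2) * C_up(v)
    for v in order:
        p = parent[v]
        if p >= 0:
            agg[v] = sim[v] * agg[p] + (1.0 - sim[v] ** 2) * agg[v]
    return agg.reshape(H, W, D)
```

A winner-takes-all disparity map then follows from `disp = agg.argmin(axis=2)`. Because the tree spans the whole image, support propagates across large textureless regions where fixed local windows fail, which is the behavior the abstract credits for the low-texture gains.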

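For the refocusing application, the thin-lens model gives each pixel's blur directly from its depth: a point at depth d, with the lens focused at depth d_f, maps to a circle of confusion of diameter c = A·f·|d − d_f| / (d·(d_f − f)), where f is the focal length and A = f/N the aperture diameter for f-number N. The sketch below applies this standard formula and approximates pixel-adapted Gaussian blurring by blending a small stack of uniformly blurred layers; the names refocus and thin_lens_coc, the px_per_m sensor-to-pixel scale, and the layered-blur approximation are assumptions for illustration, not the paper's stroke-driven Scribble2focus pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def thin_lens_coc(depth, focus_depth, focal_len, f_number, px_per_m):
    """Circle-of-confusion diameter in pixels from the thin-lens model:
    c = A * f * |d - d_f| / (d * (d_f - f)), with aperture A = f / N.
    Assumes metric depth d and focus distance d_f, both > focal_len."""
    A = focal_len / f_number
    c = A * focal_len * np.abs(depth - focus_depth) / (depth * (focus_depth - focal_len))
    return c * px_per_m                     # sensor meters -> pixels (illustrative scale)

def refocus(image, depth, focus_depth, focal_len=0.05, f_number=2.8,
            px_per_m=2.0e5, n_layers=6):
    """Synthetic depth-of-field: per-pixel Gaussian blur approximated by
    interpolating a stack of uniformly blurred layers (a common speed trick).
    image : (H, W, 3) float RGB, depth : (H, W) metric depth, n_layers >= 2."""
    sigma = 0.5 * thin_lens_coc(depth, focus_depth, focal_len, f_number, px_per_m)
    levels = np.linspace(0.0, max(sigma.max(), 1e-9), n_layers)
    # blur spatially only (sigma 0 on the channel axis), sharpest layer first
    stack = np.stack([image] + [gaussian_filter(image, sigma=(s, s, 0))
                                for s in levels[1:]])

    # per-pixel linear interpolation between the two nearest blur levels
    step = levels[1]
    t = np.clip(sigma / step, 0.0, n_layers - 1.0)   # fractional layer index
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, n_layers - 1)
    w = (t - lo)[..., None]
    rr, cc = np.indices(sigma.shape)
    return (1.0 - w) * stack[lo, rr, cc] + w * stack[hi, rr, cc]
```

In this approximation only pixels at the stroke-selected focus depth keep sigma near zero and stay sharp, while the blur grows with the circle of confusion on either side of the focal plane; a layered blend like this trades the exact per-pixel kernel for a handful of separable Gaussian filters, in the spirit of the interactive speeds the abstract reports.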
