Occlusion-aware Method for Temporally Consistent Superpixels.

Author Information

Reso Matthias, Jachalsky Jorn, Rosenhahn Bodo, Ostermann Joern

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2018 May 3. doi: 10.1109/TPAMI.2018.2832628.

Abstract

A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods lack temporal consistency or fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework is able to terminate and create superpixels whose corresponding image region becomes hidden or newly appears. Additionally, the occluded parts of superpixels are incorporated in the further optimization. This increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show the performance of our approach in comparison to state-of-the-art supervoxel and superpixel algorithms for video content.
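To make the abstract's two core ideas concrete, here is a minimal, illustrative sketch, not the authors' implementation: an EM-style refinement that alternates between assigning pixels to superpixels (E-step, using joint color/position distance) and re-estimating superpixel statistics (M-step), plus a flow-based label propagation step in which pixels left unassigned (label -1) stand in for disoccluded regions that would need new superpixels. The function names, the nearest-neighbor warping, and the simple squared-distance metric are assumptions for the sketch; the paper's contour-evolving formulation and occlusion bookkeeping are not reproduced here.

```python
import numpy as np

def propagate_labels(labels, flow):
    """Warp superpixel labels to the next frame along optical flow
    (nearest-neighbor splat; later writes win on collisions).
    Pixels no label lands on stay -1, standing in for disoccluded
    regions that would receive newly created superpixels."""
    h, w = labels.shape
    new_labels = -np.ones((h, w), dtype=int)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    new_labels[ty, tx] = labels[ys, xs]
    return new_labels

def em_superpixels(image, labels, n_iters=5):
    """One frame's EM refinement over joint color/position features:
    M-step re-estimates each superpixel's mean feature vector,
    E-step reassigns every pixel to its nearest superpixel."""
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.dstack([image, ys[..., None], xs[..., None]]).reshape(-1, c + 2)
    lab = labels.reshape(-1).copy()
    ids = np.unique(lab[lab >= 0])
    for _ in range(n_iters):
        # M-step: mean color and centroid per superpixel
        centers = np.stack([feats[lab == k].mean(axis=0) for k in ids])
        # E-step: squared distance of every pixel to every center
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        lab = ids[np.argmin(d, axis=1)]
    return lab.reshape(h, w)
```

In this toy version the spatial terms pull superpixels toward compact shapes while the color terms snap their boundaries to image edges; a real system would also weight color against position and process only a local neighborhood of candidate superpixels per pixel rather than all of them.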
