Xiong Bo, Jain Suyog Dutt, Grauman Kristen
IEEE Trans Pattern Anal Mach Intell. 2019 Nov;41(11):2677-2692. doi: 10.1109/TPAMI.2018.2865794. Epub 2018 Aug 17.
We propose an end-to-end learning framework for segmenting generic objects in both images and videos. Given a novel image or video, our approach produces a pixel-level mask for all "object-like" regions, even for object categories never seen during training. We formulate the task as a structured prediction problem of assigning an object/background label to each pixel, implemented using a deep fully convolutional network. When applied to a video, our model further incorporates a motion stream, and the network learns to combine appearance and motion to extract all prominent objects, whether they are moving or not. Beyond the core model, a second contribution of our approach is how it leverages varying strengths of training annotations. Pixel-level annotations are quite difficult to obtain, yet crucial for training a deep network approach for segmentation. Thus we propose ways to exploit weakly labeled data for learning dense foreground segmentation. For images, we show the value of mixing object category examples carrying only image-level labels with relatively few images carrying boundary-level annotations. For video, we show how to bootstrap weakly annotated videos together with the network trained for image segmentation. Through experiments on multiple challenging image and video segmentation benchmarks, our method offers consistently strong results and improves the state-of-the-art for fully automatic segmentation of generic (unseen) objects. In addition, we demonstrate how our approach benefits image retrieval and image retargeting, both of which profit from our high-quality foreground maps. Code, models, and videos are at: http://vision.cs.utexas.edu/projects/pixelobjectness/.
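The abstract describes fusing per-pixel objectness scores from an appearance stream and a motion stream into a single object/background mask. The following is a minimal NumPy sketch of that fusion step only, not the authors' network: the stream outputs, the equal fusion weights, and the 0.5 threshold are all illustrative assumptions, and in the paper the combination is itself learned by the network rather than fixed.

```python
import numpy as np


def fuse_streams(appearance_logits, motion_logits, w_app=0.5, w_mot=0.5):
    """Combine per-pixel scores from two streams into object probabilities.

    appearance_logits, motion_logits: float arrays of shape (H, W),
    standing in for the pre-sigmoid outputs of the two streams.
    The weights w_app/w_mot are hypothetical; the paper learns the fusion.
    """
    fused = w_app * appearance_logits + w_mot * motion_logits
    # Sigmoid turns the fused logit into a per-pixel object probability.
    return 1.0 / (1.0 + np.exp(-fused))


def to_mask(prob, threshold=0.5):
    """Binarize probabilities into an object(1)/background(0) label map."""
    return (prob > threshold).astype(np.uint8)


# Toy 1x2 "image": left pixel scores object-like in both streams,
# right pixel scores background-like in both.
app = np.array([[3.0, -3.0]])
mot = np.array([[1.0, -1.0]])
mask = to_mask(fuse_streams(app, mot))
print(mask.tolist())  # [[1, 0]]
```

The same thresholding view applies to the image-only model, where only the appearance stream contributes.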