Sangineto Enver, Nabi Moin, Culibrk Dubravko, Sebe Nicu
IEEE Trans Pattern Anal Mach Intell. 2019 Mar;41(3):712-725. doi: 10.1109/TPAMI.2018.2804907. Epub 2018 Feb 12.
In a weakly-supervised scenario, object detectors need to be trained using image-level annotations alone. Since bounding-box-level ground truth is not available, most of the solutions proposed so far are based on an iterative Multiple Instance Learning framework in which the current classifier is used to select the highest-confidence boxes in each image, which are treated as pseudo-ground truth in the next training iteration. However, the errors of an immature classifier can make the process drift, usually introducing many false positives into the training dataset. To alleviate this problem, we propose in this paper a training protocol based on the self-paced learning paradigm. The main idea is to iteratively select a subset of images and boxes that are the most reliable, and use them for training. While in the past few years similar strategies have been adopted for SVMs and other classifiers, we are the first to show that a self-paced approach can be used with deep-network-based classifiers in an end-to-end training pipeline. The method we propose is built on the fully-supervised Fast-RCNN architecture and can be applied to similar architectures which represent the input image as a bag of boxes. We show state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013. On ILSVRC 2013, our results based on a low-capacity AlexNet network outperform even those weakly-supervised approaches which are based on much higher-capacity networks.
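To make the self-paced idea described above concrete, the following is a minimal illustrative sketch of such a training loop, not the authors' implementation: at each iteration the current detector mines one pseudo-ground-truth box per image, only the most confident ("easiest") fraction of images is kept for retraining, and that fraction grows over iterations. All function and parameter names (score_boxes, train_detector, start_fraction, etc.) are hypothetical placeholders.

import numpy as np

def self_paced_mil_training(images, score_boxes, train_detector,
                            num_iterations=5, start_fraction=0.3):
    # images         : list of image identifiers, each with candidate boxes
    # score_boxes    : callable(image) -> (boxes, scores) from the current detector
    # train_detector : callable(list of (image, box)) that refines the detector
    fraction = start_fraction
    for it in range(num_iterations):
        pseudo_gt, confidences = [], []
        # 1. Mine the highest-scoring box per image as its pseudo ground truth.
        for img in images:
            boxes, scores = score_boxes(img)
            best = int(np.argmax(scores))
            pseudo_gt.append((img, boxes[best]))
            confidences.append(scores[best])
        # 2. Self-paced selection: keep only the most confident fraction of
        #    images, so mistakes of an immature detector are not propagated
        #    through the whole training set.
        order = np.argsort(confidences)[::-1]
        keep = order[: max(1, int(fraction * len(images)))]
        selected = [pseudo_gt[i] for i in keep]
        # 3. Retrain on the reliable subset, then admit more samples next time.
        train_detector(selected)
        fraction = min(1.0, fraction + (1.0 - start_fraction) / num_iterations)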