
Co-Bootstrapping Saliency.

Publication Information

IEEE Trans Image Process. 2017 Jan;26(1):414-425. doi: 10.1109/TIP.2016.2627804. Epub 2016 Nov 11.

Abstract

In this paper, we propose a visual saliency detection algorithm that explores the fusion of various saliency models through bootstrap learning. First, an original bootstrapping model that combines weak and strong saliency models is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. A strong classifier is then learned from the samples extracted from the weak model and is used to classify all salient and non-salient superpixels in an input image. To further improve detection performance, the multi-scale saliency maps of the weak and strong models are integrated, respectively. The final result is the combination of the weak and strong saliency maps. The original model shows that the overall performance of the proposed algorithm is largely affected by the quality of the weak saliency model. We therefore propose a co-bootstrapping mechanism that integrates the advantages of different saliency methods to construct the weak saliency model, thus addressing this problem and achieving better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms state-of-the-art methods.
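Since the abstract describes the pipeline only in prose, the sketch below illustrates one way such a weak-to-strong bootstrapping scheme could be organized. It is a minimal illustration, not the authors' implementation: the SLIC segmentation, the mean-colour superpixel features, the SVM classifier, the sampling thresholds, and the assumed `(image, segments) -> per-superpixel score` signature of the prior-based methods are all assumptions introduced here for illustration.

```python
# A minimal sketch of the weak-to-strong bootstrapping idea described above.
# The superpixel features, the SVM classifier, the thresholds, and the signature
# assumed for the prior-based methods are illustrative choices, not the paper's.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image, segments):
    """Mean colour per superpixel (a stand-in for richer descriptors)."""
    n = segments.max() + 1
    return np.array([image[segments == i].mean(axis=0) for i in range(n)])

def bootstrap_saliency(image, prior_methods, scales=(150, 250, 350),
                       pos_thresh=0.8, neg_thresh=0.2):
    """image: HxWx3 float array in [0, 1].
    prior_methods: callables (image, segments) -> per-superpixel weak saliency in [0, 1]."""
    weak_maps, strong_maps = [], []
    for n_segments in scales:
        segments = slic(image, n_segments=n_segments, start_label=0)
        feats = superpixel_features(image, segments)

        # Weak model: fuse several existing prior-based methods (the co-bootstrapping step).
        weak = np.mean([m(image, segments) for m in prior_methods], axis=0)

        # Bootstrap training samples for the strong model from the weak map.
        pos, neg = feats[weak > pos_thresh], feats[weak < neg_thresh]
        X = np.vstack([pos, neg])
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]

        # Strong model: classify every superpixel as salient or non-salient.
        clf = SVC(probability=True).fit(X, y)
        strong = clf.predict_proba(feats)[:, 1]

        # Project per-superpixel scores back to pixel-level maps.
        weak_maps.append(weak[segments])
        strong_maps.append(strong[segments])

    # Integrate each model over scales, then combine the weak and strong maps.
    return 0.5 * (np.mean(weak_maps, axis=0) + np.mean(strong_maps, axis=0))
```

Here `prior_methods` stands in for the image-prior-based components that build the weak model (and, in the co-bootstrapping variant, for the different existing saliency methods being fused); how those scores are computed is beyond the scope of this sketch.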
