Object-graphs for context-aware visual category discovery.

Affiliations

Department of Electrical and Computer Engineering, The University of Texas at Austin, ACES 3.302, 1 University Station C0803, Austin, TX 78712, USA.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2012 Feb;34(2):346-58. doi: 10.1109/TPAMI.2011.122.

Abstract

How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
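The abstract describes encoding the spatial layout of familiar-object predictions around an unfamiliar region. The following is a minimal sketch of how such a 2D object-graph-style descriptor could be formed, assuming each image region has a center coordinate and a posterior distribution over known classes; the function name, the above/below split by image y-coordinate, and the cumulative-neighborhood accumulation are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np

def object_graph_descriptor(region_centers, class_posteriors, query_idx, num_neighbors=5):
    """Sketch of a 2D object-graph-style descriptor (assumed form).

    For an unfamiliar query region, accumulate the known-class posterior
    distributions of its spatially nearest regions, split into those above
    and below the query in the image, over growing neighborhoods.
    """
    cy = region_centers[query_idx][1]
    # Euclidean distance from the query region's center to every region center.
    d = np.linalg.norm(region_centers - region_centers[query_idx], axis=1)
    order = [i for i in np.argsort(d) if i != query_idx]
    # In image coordinates, smaller y means higher in the image ("above").
    above = [i for i in order if region_centers[i][1] < cy][:num_neighbors]
    below = [i for i in order if region_centers[i][1] >= cy][:num_neighbors]
    feats = []
    for side in (above, below):
        acc = np.zeros(class_posteriors.shape[1])
        for i in side:
            acc += class_posteriors[i]  # add this neighbor's class posteriors
            # Record the normalized cumulative distribution at each neighborhood size.
            feats.append(acc / acc.sum() if acc.sum() else acc.copy())
    return np.concatenate(feats) if feats else np.zeros(0)
```

A descriptor like this lets two unknown regions be compared by the *context* of familiar objects around them rather than by appearance alone, which is the interaction the abstract refers to.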

