
Rapid biologically-inspired scene classification using features shared with visual attention.

Author information

Siagian Christian, Itti Laurent

Affiliations

Department of Computer Science, University of Southern California, Los Angeles 90089-2520, USA.

Publication information

IEEE Trans Pattern Anal Mach Intell. 2007 Feb;29(2):300-12. doi: 10.1109/TPAMI.2007.40.

Abstract

We describe and validate a simple context-based scene recognition algorithm for mobile robotics applications. The system can differentiate outdoor scenes from various sites on a college campus using a multiscale set of early-visual features, which capture the "gist" of the scene into a low-dimensional signature vector. Distinct from previous approaches, the algorithm presents the advantage of being biologically plausible and of having low-computational complexity, sharing its low-level features with a model for visual attention that may operate concurrently on a robot. We compare classification accuracy using scenes filmed at three outdoor sites on campus (13,965 to 34,711 frames per site). Dividing each site into nine segments, we obtain segment classification rates between 84.21 percent and 88.62 percent. Combining scenes from all sites (75,073 frames in total) yields 86.45 percent correct classification, demonstrating the generalization and scalability of the approach.
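The abstract describes pooling multiscale early-visual feature maps over a coarse spatial grid to form a low-dimensional "gist" signature. As a rough illustration only (the paper itself pools center-surround intensity, color, and orientation channels shared with the attention model, then reduces dimensionality with PCA/ICA), the following is a minimal intensity-only sketch of grid-pooled pyramid features; the function names and the choice of a 4x4 grid with block-average downsampling are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (a crude pyramid step)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def gist_signature(img, levels=4, grid=4):
    """Average each pyramid level over a grid x grid layout and concatenate
    the cell means into one low-dimensional signature vector."""
    feats = []
    cur = img.astype(float)
    for _ in range(levels):
        h, w = cur.shape
        gh, gw = h // grid, w // grid
        cells = cur[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw).mean(axis=(1, 3))
        feats.append(cells.ravel())
        cur = downsample(cur)
    return np.concatenate(feats)
```

With 4 levels and a 4x4 grid this yields a 64-dimensional vector per (single-channel) image; the real system concatenates many such channel signatures before dimensionality reduction and classification.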

