Object attributes combine additively in visual search.

Author Information

Pramod R T, Arun S P

Publication Information

J Vis. 2016;16(5):8. doi: 10.1167/16.5.8.

Abstract

We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
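To make the additive rule concrete, here is a minimal illustrative sketch (not the authors' implementation or data): perceived dissimilarity between two objects is modeled as a weighted sum of independent per-attribute difference terms. The attribute names, weights, and difference functions below are hypothetical placeholders chosen only to show the form of the model.

```python
# Minimal sketch of an additive dissimilarity model (illustrative only; not the
# authors' code). Each attribute contributes an independent difference term,
# and the perceived dissimilarity is their weighted sum.

from typing import Callable, Dict

# Hypothetical per-attribute difference functions: each maps a pair of object
# descriptions to a non-negative difference score.
AttributeDiff = Callable[[dict, dict], float]


def additive_dissimilarity(
    obj_a: dict,
    obj_b: dict,
    attribute_diffs: Dict[str, AttributeDiff],
    weights: Dict[str, float],
) -> float:
    """Perceived dissimilarity as a weighted sum of attribute differences."""
    return sum(
        weights[name] * diff(obj_a, obj_b)
        for name, diff in attribute_diffs.items()
    )


# Example with placeholder attributes (local contour, texture, symmetry,
# orientation), each scored by a toy absolute-difference rule.
if __name__ == "__main__":
    attrs = {
        "contour": lambda a, b: abs(a["contour"] - b["contour"]),
        "texture": lambda a, b: abs(a["texture"] - b["texture"]),
        "symmetry": lambda a, b: abs(a["symmetry"] - b["symmetry"]),
        "orientation": lambda a, b: abs(a["orientation"] - b["orientation"]),
    }
    w = {"contour": 1.0, "texture": 0.5, "symmetry": 0.8, "orientation": 0.3}
    A = {"contour": 0.9, "texture": 0.2, "symmetry": 1.0, "orientation": 0.0}
    B = {"contour": 0.4, "texture": 0.7, "symmetry": 0.0, "orientation": 0.5}
    print(additive_dissimilarity(A, B, attrs, w))  # 0.5 + 0.25 + 0.8 + 0.15 = 1.7
```

The key property the paper reports is this additivity: each attribute's contribution to perceived dissimilarity adds up without interaction terms, which is what the sum over independent difference functions expresses here.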

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2470/4790416/89bb95639fb9/i1534-7362-16-5-8-f01.jpg
