Yu Chen-Ping, Maxfield Justin T, Zelinsky Gregory J
Department of Computer Science and Department of Psychology, Stony Brook University.
Psychol Sci. 2016 Jun;27(6):870-84. doi: 10.1177/0956797616640237. Epub 2016 May 3.
This article introduces a generative model of category representation that uses computer vision methods to extract category-consistent features (CCFs) directly from images of category exemplars. The model was trained on 4,800 images of common objects, and CCFs were obtained for 68 categories spanning subordinate, basic, and superordinate levels in a category hierarchy. When participants searched for these same categories, targets cued at the subordinate level were preferentially fixated, but fixated targets were verified faster when they followed a basic-level cue. The subordinate-level advantage in guidance is explained by the number of target-category CCFs, a measure of category specificity that decreases with movement up the category hierarchy. The basic-level advantage in verification is explained by multiplying the number of CCFs by sibling distance, a measure of category distinctiveness. With this model, the visual representations of real-world object categories, each learned from the vast numbers of image exemplars accumulated throughout everyday experience, can finally be studied.
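To make the two measures described above concrete, the following is a minimal Python sketch of how CCF-based category specificity (the CCF count, which predicts guidance) and distinctiveness (CCF count multiplied by sibling distance, which predicts verification) could be computed from bag-of-words histograms of category exemplars. The histogram input, the signal-to-noise selection rule, the `snr_threshold` value, and all function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def category_consistent_features(exemplar_histograms, snr_threshold=2.0):
    """Select category-consistent features (CCFs) from bag-of-words
    histograms of a category's exemplars (rows = exemplars, cols = features).

    A feature is kept when it occurs frequently and reliably across
    exemplars, approximated here by a mean-to-standard-deviation
    (signal-to-noise) ratio above `snr_threshold`. Both the SNR rule
    and the threshold are assumptions made for this sketch.
    """
    X = np.asarray(exemplar_histograms, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-9          # avoid division by zero
    snr = mean / std
    ccf_mask = snr > snr_threshold      # boolean vector: which features are CCFs
    return ccf_mask, mean * ccf_mask    # CCF indicator and weighted CCF vector

def guidance_score(ccf_mask):
    """Category specificity: the number of CCFs (predicts search guidance)."""
    return int(ccf_mask.sum())

def verification_score(ccf_mask, category_vec, sibling_vecs):
    """Category distinctiveness: CCF count times the mean distance from the
    category's CCF vector to its siblings at the same hierarchical level
    (predicts target verification)."""
    sibling_dist = np.mean([np.linalg.norm(category_vec - s) for s in sibling_vecs])
    return guidance_score(ccf_mask) * sibling_dist
```

Under this sketch, subordinate-level categories would tend to yield more CCFs (higher guidance scores), while basic-level categories, being both reasonably specific and well separated from their siblings, would tend to maximize the verification score.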