Harrison William J, Bex Peter J
Department of Psychology, Northeastern University, Boston, MA 02115, USA; Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK; Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia.
Department of Psychology, Northeastern University, Boston, MA 02115, USA.
Curr Biol. 2015 Dec 21;25(24):3213-9. doi: 10.1016/j.cub.2015.10.052. Epub 2015 Nov 25.
Peripheral vision is fundamentally limited not by the visibility of features, but by the spacing between them [1]. When too close together, visual features can become "crowded" and perceptually indistinguishable. Crowding interferes with basic tasks such as letter and face identification and thus informs our understanding of object recognition breakdown in peripheral vision [2]. Multiple proposals have attempted to explain crowding [3], and each is supported by compelling psychophysical and neuroimaging data [4-6] that are incompatible with competing proposals. In general, perceptual failures have variously been attributed to the averaging of nearby visual signals [7-10], confusion between target and distractor elements [11, 12], and a limited resolution of visual spatial attention [13]. Here we introduce a psychophysical paradigm that allows systematic study of crowded perception within the orientation domain, and we present a unifying computational model of crowding phenomena that reconciles conflicting explanations. Our results show that this single measure produces the variety of perceptual errors reported across the crowding literature. Critically, a simple model of the responses of populations of orientation-selective visual neurons accurately predicts all perceptual errors. We thus provide a unifying mechanistic explanation for orientation crowding in peripheral vision. Our simple model accounts for several perceptual phenomena produced by crowding of orientation and raises the possibility that multiple classes of object recognition failures in peripheral vision can be accounted for by a single mechanism.
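To illustrate the kind of population-response account the abstract describes, the following is a minimal sketch (not the authors' published implementation) of how pooling the responses of orientation-selective neurons to a target and a flanker can bias the decoded orientation. The von Mises tuning curves, the tuning bandwidth kappa, the flanker weight w_flanker, and the population-vector decoder are all illustrative assumptions.

```python
import numpy as np

def population_response(stimulus_deg, preferred_deg, kappa=2.0):
    """Assumed von Mises tuning curves over orientation (180-deg periodic),
    using the doubled-angle convention for orientation."""
    delta = np.deg2rad(2 * (stimulus_deg - preferred_deg))
    return np.exp(kappa * (np.cos(delta) - 1))

def decode_orientation(response, preferred_deg):
    """Population-vector decode in doubled-angle space, returned in degrees."""
    angles = np.deg2rad(2 * preferred_deg)
    vec = np.sum(response * np.exp(1j * angles))
    return (np.rad2deg(np.angle(vec)) / 2) % 180

preferred = np.arange(0, 180, 1.0)   # preferred orientations of the model neurons (deg)
target, flanker = 90.0, 110.0        # hypothetical target and flanker orientations (deg)
w_flanker = 0.8                      # assumed weight of the flanker's contribution under crowding

# Crowded population response: weighted sum of responses to target and flanker
crowded = (population_response(target, preferred)
           + w_flanker * population_response(flanker, preferred))

# Decoded orientation lies between target and flanker, i.e., an assimilation/averaging error
print(decode_orientation(crowded, preferred))
```

Under these assumptions the decoded orientation is pulled toward the flanker, reproducing averaging-type errors; changing the relative weighting or adding response noise before decoding can instead yield substitution-like reports of the flanker orientation.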