Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213.
School of Medicine, University of Pittsburgh, Pittsburgh, PA 15260.
Proc Natl Acad Sci U S A. 2020 May 19;117(20):11167-11177. doi: 10.1073/pnas.1912734117. Epub 2020 May 4.
Irrespective of whether one has substantial perceptual expertise for a class of stimuli, an observer invariably encounters novel exemplars from this class. To understand how novel exemplars are represented, we examined the extent to which previous experience with a category constrains the acquisition and nature of representation of subsequent exemplars from that category. Participants completed a perceptual training paradigm with either novel other-race faces (category of experience) or novel computer-generated objects (YUFOs) that included pairwise similarity ratings at the beginning, middle, and end of training, and a 20-d visual search training task on a subset of category exemplars. Analyses of pairwise similarity ratings revealed multiple dissociations between the representational spaces for those learning faces and those learning YUFOs. First, representational distance changes were more selective for faces than YUFOs; trained faces exhibited greater magnitude in representational distance change relative to untrained faces, whereas this trained-untrained distance change was much smaller for YUFOs. Second, there was a difference in where the representational distance changes were observed; for faces, representations that were closer together before training exhibited a greater distance change relative to those that were farther apart before training. For YUFOs, however, the distance changes occurred more uniformly across representational space. Last, there was a decrease in dimensionality of the representational space after training on YUFOs, but not after training on faces. Together, these findings demonstrate how previous category experience governs representational patterns of exemplar learning as well as the underlying dimensionality of the representational space.