Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, United Kingdom and
Department of Psychology, University of California-Riverside, Riverside, California 92521.
J Neurosci. 2018 Jul 4;38(27):6028-6044. doi: 10.1523/JNEUROSCI.1620-17.2018. Epub 2018 May 23.
Understanding visual perceptual learning (VPL) has become increasingly challenging as new phenomena are discovered with novel stimuli and training paradigms. Although existing models advance our understanding of critical aspects of VPL, the connections these models draw between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adapted to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), although not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at multiple levels of analysis. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could transfer asymmetrically to coarse discriminations when the stimulus conditions varied. Consistent with the behavioral findings, the distribution of plasticity moved toward lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units closely resembled extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL, can serve as a test bed for theories, and assists in generating predictions for physiological investigations.

SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced behavioral and physiological patterns similar to those found in human and monkey experiments. Unlike existing VPL models, the DNN was pretrained on natural images to reach high performance in object recognition and was not designed specifically for VPL; nevertheless, it fulfilled predictions of existing theories regarding specificity and plasticity and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deeply hierarchical model can provide new ways of studying VPL from behavior to physiology.
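Below is a minimal sketch, not the authors' code, of the kind of experiment the abstract describes: fine-tune an ImageNet-pretrained DNN on a two-choice Gabor orientation discrimination task and track how much each layer's weights change as a crude index of where plasticity lands. The network choice (VGG-16), image size, angular offset (task precision), and training details are illustrative assumptions, not values from the paper.

```python
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models


def gabor(size=224, theta_deg=45.0, sigma=20.0, wavelength=12.0, phase=0.0):
    """Render a Gabor patch (oriented grating in a Gaussian envelope)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(np.float32)
    theta = np.deg2rad(theta_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)            # rotated coordinate
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    img = (grating * envelope).astype(np.float32)         # values in [-1, 1]
    return np.repeat(img[None, :, :], 3, axis=0)          # 3-channel image


def make_batch(reference_deg, offset_deg, n=16):
    """Label 1 if the Gabor is tilted clockwise of the reference, else 0."""
    xs, ys = [], []
    for _ in range(n):
        clockwise = np.random.rand() < 0.5
        theta = reference_deg + (offset_deg if clockwise else -offset_deg)
        phase = np.random.uniform(0, 2 * np.pi)           # randomize phase
        xs.append(gabor(theta_deg=theta, phase=phase))
        ys.append(int(clockwise))
    return torch.tensor(np.stack(xs)), torch.tensor(ys)


# Pretrained object-recognition network, re-headed for the two-choice task
# (assumed architecture; the abstract does not name the specific DNN).
net = models.vgg16(weights="IMAGENET1K_V1")
net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)

before = {k: v.detach().clone() for k, v in net.named_parameters()}
opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                                       # brief training run
    x, y = make_batch(reference_deg=45.0, offset_deg=2.0) # "precise" task
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

# Per-layer relative weight change as a rough proxy for plasticity distribution.
for name, p in net.named_parameters():
    if "weight" in name:
        delta = (p.detach() - before[name]).norm() / before[name].norm()
        print(f"{name}: {delta:.4f}")
```

Under this setup, comparing runs with small versus large angular offsets (fine vs. coarse discrimination) would show how the layer-wise change profile shifts with task precision, the kind of analysis the abstract summarizes.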