Layerwise complexity-matched learning yields an improved model of cortical area V2.

Authors

Parthasarathy Nikhil, Hénaff Olivier J, Simoncelli Eero P

Affiliations

Center for Neural Science, New York University.

Center for Computational Neuroscience, Flatiron Institute.

Publication information

arXiv. 2024 Jul 18: arXiv:2312.11436v3.

Abstract

Human ability to recognize complex visual patterns arises through transformations performed by successive areas in the ventral visual cortex. Deep neural networks trained end-to-end for object recognition approach human capabilities, and offer the best descriptions to date of neural responses in the late stages of the hierarchy. But these networks provide a poor account of the early stages, compared to traditional hand-engineered models, or models optimized for coding efficiency or prediction. Moreover, the gradient backpropagation used in end-to-end learning is generally considered to be biologically implausible. Here, we overcome both of these limitations by developing a bottom-up self-supervised training methodology that operates independently on successive layers. Specifically, we maximize feature similarity between pairs of locally-deformed natural image patches, while decorrelating features across patches sampled from other images. Crucially, the deformation amplitudes are adjusted proportionally to receptive field sizes in each layer, thus matching the task complexity to the capacity at each stage of processing. In comparison with architecture-matched versions of previous models, we demonstrate that our layerwise complexity-matched learning (LCL) formulation produces a two-stage model (LCL-V2) that is better aligned with selectivity properties and neural activity in primate area V2. We demonstrate that the complexity-matched learning paradigm is responsible for much of the emergence of the improved biological alignment. Finally, when the two-stage model is used as a fixed front-end for a deep network trained to perform object recognition, the resultant model (LCL-V2Net) is significantly better than standard end-to-end self-supervised, supervised, and adversarially-trained models in terms of generalization to out-of-distribution tasks and alignment with human behavior. Our code and pre-trained checkpoints are available at https://github.com/nikparth/LCL-V2.git.
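The core recipe in the abstract, training each stage independently by pulling together features of two locally-deformed views of the same patches while decorrelating features across patches from other images, with deformation amplitude scaled to the stage's receptive-field size, can be sketched compactly. The following is a minimal illustration in PyTorch, not the authors' implementation (see their repository): the toy two-stage architecture, the random affine jitter standing in for local deformations, the receptive-field sizes, and the loss weighting are all assumptions.

```python
# Hypothetical sketch of layerwise complexity-matched learning (LCL).
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T

def lcl_loss(z1, z2, decorr_weight=5e-3):
    """Pull together features of two deformed views of the same patches,
    while decorrelating features across patches from different images."""
    z1 = F.normalize(z1.flatten(1), dim=1)
    z2 = F.normalize(z2.flatten(1), dim=1)
    similarity = (z1 * z2).sum(dim=1).mean()            # positive-pair term
    cross = z1 @ z2.T                                   # (batch, batch) similarities
    off_diag = cross - torch.diag(torch.diagonal(cross))
    return -similarity + decorr_weight * off_diag.pow(2).sum()

stages = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 32, 9, stride=2), nn.ReLU()),   # "V1-like" stage
    nn.Sequential(nn.Conv2d(32, 64, 9, stride=2), nn.ReLU()),  # "V2-like" stage
])
rf_size = [9, 25]   # illustrative receptive-field widths, in pixels

for i, stage in enumerate(stages):
    # Complexity matching: the deformation amplitude scales with the
    # receptive-field size of the stage currently being trained.
    deform = T.RandomAffine(degrees=0, translate=(rf_size[i] / 64,) * 2)
    opt = torch.optim.Adam(stage.parameters(), lr=1e-3)
    for step in range(100):                  # toy loop on random "patches"
        x = torch.rand(16, 3, 64, 64)
        v1, v2 = deform(x), deform(x)        # two deformed views
        with torch.no_grad():                # earlier stages stay frozen
            for prev in stages[:i]:
                v1, v2 = prev(v1), prev(v2)
        loss = lcl_loss(stage(v1), stage(v2))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the full method, each stage is trained before the next, so no gradient ever backpropagates across stage boundaries; the trained two-stage model can then serve as a fixed front-end for a downstream recognition network, as with LCL-V2Net in the abstract.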


