

Neural representational geometry underlies few-shot concept learning.

Affiliations

Department of Applied Physics, Stanford University, Stanford, CA 94305.

Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Stanford, CA 94305.

Publication Information

Proc Natl Acad Sci U S A. 2022 Oct 25;119(43):e2200800119. doi: 10.1073/pnas.2200800119. Epub 2022 Oct 17.

Abstract

Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
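As a rough illustration of the mechanism the abstract describes, the sketch below implements prototype learning: a single readout whose weight vector is the difference of the class means of a few examples, one simple, Hebbian-style plasticity rule consistent with the abstract's description (the paper's exact rule may differ). Concept manifolds are modeled here as Gaussian clouds of effective dimension D and radius R in an N-dimensional firing-rate space; N, D, R, and m are hypothetical values chosen for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry: concept manifolds as Gaussian clouds in an
# N-dimensional firing-rate space. All values are illustrative.
N = 500   # ambient neural dimension
D = 50    # effective manifold dimension (directions of variability)
R = 1.0   # manifold radius (total within-manifold spread)
m = 5     # training examples per concept ("m-shot")

def sample_manifold(center, n_samples):
    """Draw points from a concept manifold: the centroid plus noise of
    total variance R**2 confined to the first D axes."""
    noise = np.zeros((n_samples, N))
    noise[:, :D] = rng.standard_normal((n_samples, D)) * (R / np.sqrt(D))
    return center + noise

# Two novel concepts with unit-norm prototypes (centroids).
center_a = rng.standard_normal(N)
center_a /= np.linalg.norm(center_a)
center_b = rng.standard_normal(N)
center_b /= np.linalg.norm(center_b)

train_a = sample_manifold(center_a, m)
train_b = sample_manifold(center_b, m)

# Simple plasticity rule (prototype learning): the readout weight vector
# is the difference of the empirical class means, with the decision
# threshold at the midpoint between the two prototypes.
w = train_a.mean(axis=0) - train_b.mean(axis=0)
threshold = 0.5 * (train_a.mean(axis=0) + train_b.mean(axis=0)) @ w

# Few-shot generalization on held-out samples from both manifolds.
test_a = sample_manifold(center_a, 1000)
test_b = sample_manifold(center_b, 1000)
accuracy = 0.5 * ((test_a @ w > threshold).mean()
                  + (test_b @ w <= threshold).mean())
print(f"{m}-shot test accuracy: {accuracy:.3f}")
```

In this toy geometry, raising D while holding R and the prototype separation fixed improves accuracy: the interference between a test point's within-manifold noise and the noisy prototype estimate shrinks roughly as 1/sqrt(m*D), which mirrors, in simplified form, the theory's claim that high-dimensional manifolds enhance learning from few examples.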


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/4e81db2e9fd8/pnas.2200800119fig01.jpg
