

Neural representational geometry underlies few-shot concept learning.

Affiliations

Department of Applied Physics, Stanford University, Stanford, CA 94305.

Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Stanford, CA 94305.

Publication information

Proc Natl Acad Sci U S A. 2022 Oct 25;119(43):e2200800119. doi: 10.1073/pnas.2200800119. Epub 2022 Oct 17.

DOI: 10.1073/pnas.2200800119
PMID: 36251997
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9618072/
Abstract

Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
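The mechanism described above — a single downstream readout that learns a new concept from a handful of examples via a simple plasticity rule — can be illustrated with a minimal prototype-learning sketch. This is an assumption-laden toy, not the paper's implementation: concept manifolds are modeled as Gaussian point clouds around random centroids in firing-rate space, and the "plasticity rule" sets the readout weight to the difference of the few-shot class prototypes; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200        # ambient dimension (number of model "neurons"); arbitrary
m = 5          # few-shot training examples per concept
n_test = 500   # held-out test examples per concept
noise = 0.5    # within-manifold spread; arbitrary

# Two toy concept manifolds: Gaussian clouds around random centroids.
centroids = rng.normal(size=(2, N))

def sample(concept, n):
    """Draw n firing-rate vectors from the given concept manifold."""
    return centroids[concept] + noise * rng.normal(size=(n, N))

# Prototype-style "plasticity rule": the readout weight is the
# difference of the class means estimated from only m examples each.
proto_a = sample(0, m).mean(axis=0)
proto_b = sample(1, m).mean(axis=0)
w = proto_a - proto_b
b = -w @ (proto_a + proto_b) / 2  # threshold midway between prototypes

# Classify held-out examples with the single linear readout.
test = np.vstack([sample(0, n_test), sample(1, n_test)])
labels = np.array([0] * n_test + [1] * n_test)
pred = (test @ w + b < 0).astype(int)
acc = (pred == labels).mean()
print(f"few-shot accuracy: {acc:.3f}")
```

With well-separated, tightly circumscribed manifolds, even m = 5 examples suffice for near-perfect readout accuracy; shrinking the centroid separation or inflating `noise` degrades it, which is the qualitative dependence on manifold geometry that the paper's theory makes quantitative.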


Figures (1-8):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/4e81db2e9fd8/pnas.2200800119fig01.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/b454f7e9a978/pnas.2200800119fig02.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/e2514a63fdfd/pnas.2200800119fig03.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/796e7213e4ef/pnas.2200800119fig04.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/02755810e39c/pnas.2200800119fig05.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/65c7ee14e214/pnas.2200800119fig06.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/2ac9e10c529f/pnas.2200800119fig07.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f64/9618072/9502f7a5b703/pnas.2200800119fig08.jpg

Similar articles

1. Neural representational geometry underlies few-shot concept learning. Proc Natl Acad Sci U S A. 2022 Oct 25;119(43):e2200800119. doi: 10.1073/pnas.2200800119. Epub 2022 Oct 17.
2. Representations and generalization in artificial and brain neural networks. Proc Natl Acad Sci U S A. 2024 Jul 2;121(27):e2311805121. doi: 10.1073/pnas.2311805121. Epub 2024 Jun 24.
3. Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition. Front Comput Neurosci. 2021 Jan 12;14:586671. doi: 10.3389/fncom.2020.586671. eCollection 2020.
4. The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks. Nat Neurosci. 2023 Nov;26(11):1906-1915. doi: 10.1038/s41593-023-01460-y. Epub 2023 Oct 12.
5. Brain-optimized deep neural network models of human visual areas learn non-hierarchical representations. Nat Commun. 2023 Jun 7;14(1):3329. doi: 10.1038/s41467-023-38674-4.
6. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects. Neuroimage. 2017 Jan 15;145(Pt B):329-336. doi: 10.1016/j.neuroimage.2015.12.036. Epub 2015 Dec 24.
7. How well do rudimentary plasticity rules predict adult visual object learning? PLoS Comput Biol. 2023 Dec 11;19(12):e1011713. doi: 10.1371/journal.pcbi.1011713. eCollection 2023 Dec.
8. Unsupervised neural network models of the ventral visual stream. Proc Natl Acad Sci U S A. 2021 Jan 19;118(3). doi: 10.1073/pnas.2014196118.
9. Deep Neural Networks for Modeling Visual Perceptual Learning. J Neurosci. 2018 Jul 4;38(27):6028-6044. doi: 10.1523/JNEUROSCI.1620-17.2018. Epub 2018 May 23.
10. How lateral connections and spiking dynamics may separate multiple objects moving together. PLoS One. 2013 Aug 2;8(8):e69952. doi: 10.1371/journal.pone.0069952. Print 2013.

Cited by

1. Summary statistics of learning link changing neural representations to behavior. Front Neural Circuits. 2025 Aug 29;19:1618351. doi: 10.3389/fncir.2025.1618351. eCollection 2025.
2. Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks. ArXiv. 2025 May 28:arXiv:2410.03972v2.
3. The cortical critical power law balances energy and information in an optimal fashion.

References cited

1. A unified theory for the computational and mechanistic origins of grid cells. Neuron. 2023 Jan 4;111(1):121-137.e13. doi: 10.1016/j.neuron.2022.10.003. Epub 2022 Oct 27.
2. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. Adv Neural Inf Process Syst. 2019 Dec;32:8537-8547.
3. A neural network trained for prediction mimics diverse features of biological neurons and perception. Proc Natl Acad Sci U S A. 2025 May 27;122(21):e2418218122. doi: 10.1073/pnas.2418218122. Epub 2025 May 23.
4. Summary statistics of learning link changing neural representations to behavior. ArXiv. 2025 Jul 14:arXiv:2504.16920v2.
5. Nonlinear classification of neural manifolds with contextual information. Phys Rev E. 2025 Mar;111(3-2):035302. doi: 10.1103/PhysRevE.111.035302.
6. Coding schemes in neural networks learning classification tasks. Nat Commun. 2025 Apr 9;16(1):3354. doi: 10.1038/s41467-025-58276-6.
7. Unraveling the Geometry of Visual Relational Reasoning. ArXiv. 2025 Feb 24:arXiv:2502.17382v1.
8. Informational ecosystems partially explain differences in socioenvironmental conceptual associations between U.S. American racial groups. Commun Psychol. 2025 Jan 20;3(1):5. doi: 10.1038/s44271-025-00186-w.
9. Linking neural population formatting to function. bioRxiv. 2025 Jan 3:2025.01.03.631242. doi: 10.1101/2025.01.03.631242.
10. EEG spectral attractors identify a geometric core of brain dynamics. Patterns (N Y). 2024 Jul 19;5(9):101025. doi: 10.1016/j.patter.2024.101025. eCollection 2024 Sep 13.
Nat Mach Intell. 2020 Apr;2(4):210-219. doi: 10.1038/s42256-020-0170-9. Epub 2020 Apr 20.
11. Capturing human categorization of natural images by combining deep networks and cognitive models. Nat Commun. 2020 Oct 27;11(1):5418. doi: 10.1038/s41467-020-18946-z.
12. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nat Hum Behav. 2020 Nov;4(11):1173-1185. doi: 10.1038/s41562-020-00951-3. Epub 2020 Oct 12.
13. Universality and individuality in neural dynamics across large populations of recurrent networks. Adv Neural Inf Process Syst. 2019 Dec;2019:15629-15641.
14. A map of object space in primate inferotemporal cortex. Nature. 2020 Jul;583(7814):103-108. doi: 10.1038/s41586-020-2350-5. Epub 2020 Jun 3.
15. Separability and geometry of object manifolds in deep neural networks. Nat Commun. 2020 Feb 6;11(1):746. doi: 10.1038/s41467-020-14578-5.
16. High-dimensional geometry of population responses in visual cortex. Nature. 2019 Jul;571(7765):361-365. doi: 10.1038/s41586-019-1346-5. Epub 2019 Jun 26.
17. Accurate Estimation of Neural Population Dynamics without Spike Sorting. Neuron. 2019 Jul 17;103(2):292-308.e4. doi: 10.1016/j.neuron.2019.05.003. Epub 2019 Jun 3.