

Neural knowledge assembly in humans and neural networks.

Affiliations

Department of Cognitive Science, Occidental College, Los Angeles, CA 90041, USA; Department of Experimental Psychology, University of Oxford, Oxford OX2 6GC, UK.

Department of Experimental Psychology, University of Oxford, Oxford OX2 6GC, UK.

Publication

Neuron. 2023 May 3;111(9):1504-1516.e9. doi: 10.1016/j.neuron.2023.02.014. Epub 2023 Mar 9.

DOI: 10.1016/j.neuron.2023.02.014
PMID: 36898375
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10618408/
Abstract

Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned a transitive ordering among novel objects within two distinct contexts before exposure to new knowledge that revealed how they were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas revealed that objects were rapidly and dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapt online stochastic gradient descent to permit similar rapid knowledge assembly in a neural network model.
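The task structure the abstract describes (two separately learned transitive orderings merged by linking information, trained with online stochastic gradient descent) can be sketched in a deliberately minimal toy model. This is an illustration under my own assumptions, not the authors' network: each object gets a single scalar "rank", pairwise comparisons are scored with a logistic loss, and the object indices, learning rate, and training schedule are all invented for the example.

```python
# Illustrative sketch only (not the paper's model): eight novel objects
# form two transitive orderings, context A (0 < 1 < 2 < 3) and context B
# (4 < 5 < 6 < 7). Each object gets a scalar "rank" trained by online
# stochastic gradient descent on a logistic pairwise loss; a linking pair
# (3 < 4) then pulls the two separately learned orderings into one.
import math

ranks = {i: 0.0 for i in range(8)}

def sgd_step(lo, hi, lr=0.5):
    """One online SGD step enforcing ranks[lo] < ranks[hi]."""
    p = 1.0 / (1.0 + math.exp(ranks[lo] - ranks[hi]))  # P(hi ranked above lo)
    grad = 1.0 - p                                     # gradient of -log p
    ranks[hi] += lr * grad
    ranks[lo] -= lr * grad

within = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]

# Phase 1: learn the two contexts separately; their ranks overlap on the
# same axis because no cross-context comparison has been seen yet.
for _ in range(50):
    for pair in within:
        sgd_step(*pair)

# Phase 2: expose the linking pair (3 < 4) alongside continued
# within-context practice; the ranks reassemble into one combined order.
for _ in range(100):
    for pair in within + [(3, 4)]:
        sgd_step(*pair)

print(sorted(range(8), key=ranks.get))
```

Note that in this toy the link pair is rehearsed many times alongside the within-context pairs, so it only illustrates the task setup; it does not reproduce the few-shot rearrangement after minimal exposure that the paper reports, which is the point of the authors' adaptation of online SGD.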


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/190c01c87265/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/740e8c60d1dd/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/4d5d2afdf6d2/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/b118d9647de0/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/1eda3ce3e6ed/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e674/10618408/d600070dd22b/gr6.jpg

Similar articles

1
Neural knowledge assembly in humans and neural networks.
Neuron. 2023 May 3;111(9):1504-1516.e9. doi: 10.1016/j.neuron.2023.02.014. Epub 2023 Mar 9.
2
Observing Action Sequences Elicits Sequence-Specific Neural Representations in Frontoparietal Brain Regions.
J Neurosci. 2018 Nov 21;38(47):10114-10128. doi: 10.1523/JNEUROSCI.1597-18.2018. Epub 2018 Oct 3.
3
Representations and generalization in artificial and brain neural networks.
Proc Natl Acad Sci U S A. 2024 Jul 2;121(27):e2311805121. doi: 10.1073/pnas.2311805121. Epub 2024 Jun 24.
4
Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers.
Neural Netw. 2023 Jul;164:382-394. doi: 10.1016/j.neunet.2023.04.028. Epub 2023 Apr 25.
5
A Gradient of Sharpening Effects by Perceptual Prior across the Human Cortical Hierarchy.
J Neurosci. 2021 Jan 6;41(1):167-178. doi: 10.1523/JNEUROSCI.2023-20.2020. Epub 2020 Nov 18.
6
Subspace distillation for continual learning.
Neural Netw. 2023 Oct;167:65-79. doi: 10.1016/j.neunet.2023.07.047. Epub 2023 Aug 6.
7
Representations of common event structure in medial temporal lobe and frontoparietal cortex support efficient inference.
Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29338-29345. doi: 10.1073/pnas.1912338117.
8
Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes.
Neural Comput. 2024 Jun 7;36(7):1424-1432. doi: 10.1162/neco_a_01668.
9
Few-Shot Learning in Spiking Neural Networks by Multi-Timescale Optimization.
Neural Comput. 2021 Aug 19;33(9):2439-2472. doi: 10.1162/neco_a_01423.
10
Efficient neural codes naturally emerge through gradient descent learning.
Nat Commun. 2022 Dec 29;13(1):7972. doi: 10.1038/s41467-022-35659-7.

Cited by

1
Distinct neural representational geometries of numerosity in early visual and association regions across visual streams.
Commun Biol. 2025 Jul 9;8(1):1029. doi: 10.1038/s42003-025-08395-z.
2
Elucidating the selection mechanisms in context-dependent computation through low-rank neural network modeling.
Elife. 2025 Jul 3;13:RP103636. doi: 10.7554/eLife.103636.
3
Humans learn generalizable representations through efficient coding.
Nat Commun. 2025 Apr 29;16(1):3989. doi: 10.1038/s41467-025-58848-6.
4
Unraveling the Geometry of Visual Relational Reasoning.
ArXiv. 2025 Feb 24:arXiv:2502.17382v1.
5
A Rapid Cortical Learning Process Supporting Students' Knowledge Construction During Real Classroom Teaching.
Adv Sci (Weinh). 2025 May;12(18):e2416610. doi: 10.1002/advs.202416610. Epub 2025 Feb 7.
6
Neural mechanisms of relational learning and fast knowledge reassembly in plastic neural networks.
Nat Neurosci. 2025 Feb;28(2):406-414. doi: 10.1038/s41593-024-01852-8. Epub 2025 Jan 15.
7
Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory.
Nat Hum Behav. 2025 Feb;9(2):360-375. doi: 10.1038/s41562-024-02047-8. Epub 2024 Nov 7.
8
Transitive inference as probabilistic preference learning.
Psychon Bull Rev. 2025 Apr;32(2):674-689. doi: 10.3758/s13423-024-02600-6. Epub 2024 Oct 22.
9
A geometrical solution underlies general neural principle for serial ordering.
Nat Commun. 2024 Sep 19;15(1):8238. doi: 10.1038/s41467-024-52240-6.
10
Identifying Transfer Learning in the Reshaping of Inductive Biases.
Open Mind (Camb). 2024 Sep 15;8:1107-1128. doi: 10.1162/opmi_a_00158. eCollection 2024.

References

1
Orthogonal representations for robust context-dependent task performance in brains and neural networks.
Neuron. 2022 Dec 21;110(24):4212-4219. doi: 10.1016/j.neuron.2022.12.004.
2
Orthogonal representations for robust context-dependent task performance in brains and neural networks.
Neuron. 2022 Apr 6;110(7):1258-1270.e11. doi: 10.1016/j.neuron.2022.01.005. Epub 2022 Jan 31.
3
Learning and Representation of Hierarchical Concepts in Hippocampus and Prefrontal Cortex.
J Neurosci. 2021 Sep 8;41(36):7675-7686. doi: 10.1523/JNEUROSCI.0657-21.2021. Epub 2021 Jul 30.
4
Impaired neural replay of inferred relationships in schizophrenia.
Cell. 2021 Aug 5;184(16):4315-4328.e17. doi: 10.1016/j.cell.2021.06.012. Epub 2021 Jun 30.
5
Representational geometry of perceptual decisions in the monkey parietal cortex.
Cell. 2021 Jul 8;184(14):3748-3761.e18. doi: 10.1016/j.cell.2021.05.022. Epub 2021 Jun 24.
6
Formalizing planning and information search in naturalistic decision-making.
Nat Neurosci. 2021 Aug;24(8):1051-1064. doi: 10.1038/s41593-021-00866-w. Epub 2021 Jun 21.
7
Concept formation as a computational cognitive process.
Curr Opin Behav Sci. 2021 Apr;38:83-89. doi: 10.1016/j.cobeha.2020.12.005. Epub 2021 Jan 8.
8
Neural state space alignment for magnitude generalization in humans and recurrent networks.
Neuron. 2021 Apr 7;109(7):1214-1226.e8. doi: 10.1016/j.neuron.2021.02.004. Epub 2021 Feb 23.
9
How humans learn and represent networks.
Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29407-29415. doi: 10.1073/pnas.1912328117.
10
If deep learning is the answer, what is the question?
Nat Rev Neurosci. 2021 Jan;22(1):55-67. doi: 10.1038/s41583-020-00395-8. Epub 2020 Nov 16.