

On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper.

Author Information

Garagnani Max

Affiliations

Department of Computing, Goldsmiths - University of London, London, UK.

Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany.

Publication Information

Cogn Neurodyn. 2024 Dec;18(6):3383-3400. doi: 10.1007/s11571-023-10061-1. Epub 2024 Feb 4.

DOI: 10.1007/s11571-023-10061-1
PMID: 39712129
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11655761/
Abstract

The ability to coactivate (or "superpose") multiple conceptual representations is a fundamental function that we constantly rely upon; this is crucial in complex cognitive tasks requiring multi-item working memory, such as mental arithmetic, abstract reasoning, and language comprehension. As such, an artificial system aspiring to implement any of these aspects of general intelligence should be able to support this operation. I argue here that standard, feed-forward deep neural networks (DNNs) are unable to implement this function, whereas an alternative, fully brain-constrained class of neural architectures spontaneously exhibits it. On the basis of novel simulations, this proof-of-concept article shows that deep, brain-like networks trained with biologically realistic Hebbian learning mechanisms display the spontaneous emergence of internal circuits (cell assemblies) having features that make them natural candidates for supporting superposition. Building on previous computational modelling results, I also argue that, and offer an explanation as to why, in contrast, modern DNNs trained with gradient descent are generally unable to co-activate their internal representations. While deep brain-constrained neural architectures spontaneously develop the ability to support superposition as a result of (1) neurophysiologically accurate learning and (2) cortically realistic between-area connections, backpropagation-trained DNNs appear to be unsuited to implement this basic cognitive operation, arguably necessary for abstract thinking and general intelligence. The implications of this observation are briefly discussed in the larger context of existing and future artificial intelligence systems and neuro-realistic computational models.
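The abstract's central mechanism — Hebbian learning carving out internal circuits (cell assemblies) whose co-activation implements superposition — can be illustrated with a toy auto-associative network. This is a minimal sketch, not the paper's brain-constrained architecture; the pattern sizes, threshold `theta`, and settling schedule are arbitrary illustrative choices:

```python
import numpy as np

n = 100
# Two non-overlapping sparse binary patterns, standing in for two "cell assemblies"
a = np.zeros(n)
a[:10] = 1
b = np.zeros(n)
b[50:60] = 1

# Hebbian outer-product learning: units that fire together are wired together
W = np.outer(a, a) + np.outer(b, b)
np.fill_diagonal(W, 0)  # no self-connections

def settle(x, W, steps=5, theta=0.5):
    """Synchronously update binary units until activity stabilizes."""
    for _ in range(steps):
        x = (W @ x > theta).astype(float)
    return x

# Cue both assemblies at once: recurrent Hebbian weights sustain
# both patterns simultaneously (a simple form of superposition)
cue = np.clip(a + b, 0, 1)
out = settle(cue, W)
# out keeps both assemblies active and recruits no spurious units
```

The point of the sketch is that the Hebbian weight matrix stores each pattern as a mutually supportive circuit, so nothing forces the network into a single winner: both assemblies can remain co-active. A feed-forward network trained with gradient descent has no analogous recurrent attractor structure to hold two representations active at once, which is the contrast the paper develops.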


Figures (PMC):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6923/11655761/2f3b8ee331e1/11571_2023_10061_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6923/11655761/283070a65312/11571_2023_10061_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6923/11655761/9b0853fe11ff/11571_2023_10061_Fig3_HTML.jpg

Similar Articles

1. On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper.
Cogn Neurodyn. 2024 Dec;18(6):3383-3400. doi: 10.1007/s11571-023-10061-1. Epub 2024 Feb 4.
2. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
3. Causal Influence of Linguistic Learning on Perceptual and Conceptual Processing: A Brain-Constrained Deep Neural Network Study of Proper Names and Category Terms.
J Neurosci. 2024 Feb 28;44(9):e1048232023. doi: 10.1523/JNEUROSCI.1048-23.2023.
4. Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning.
Top Cogn Sci. 2022 Oct;14(4):702-717. doi: 10.1111/tops.12571. Epub 2021 Oct 5.
5. Deep convolutional neural network and IoT technology for healthcare.
Digit Health. 2024 Jan 17;10:20552076231220123. doi: 10.1177/20552076231220123. eCollection 2024 Jan-Dec.
6. Deep Neural Networks and Visuo-Semantic Models Explain Complementary Components of Human Ventral-Stream Representational Dynamics.
J Neurosci. 2023 Mar 8;43(10):1731-1741. doi: 10.1523/JNEUROSCI.1424-22.2022. Epub 2023 Feb 9.
7. Influence of language on perception and concept formation in a brain-constrained deep neural network model.
Philos Trans R Soc Lond B Biol Sci. 2023 Feb 13;378(1870):20210373. doi: 10.1098/rstb.2021.0373. Epub 2022 Dec 26.
8. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments.
Front Psychol. 2017 Oct 9;8:1726. doi: 10.3389/fpsyg.2017.01726. eCollection 2017.
9. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.
Cognit Comput. 2009 Jun;1(2):160-176. doi: 10.1007/s12559-009-9011-1.
10. Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models.
Proc Natl Acad Sci U S A. 2020 Oct 20;117(42):25966-25974. doi: 10.1073/pnas.1910416117. Epub 2020 Sep 28.

References Cited in This Article

1. Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks.
Prog Neurobiol. 2023 Nov;230:102511. doi: 10.1016/j.pneurobio.2023.102511. Epub 2023 Jul 22.
2. Grounded Cognition, Linguistic Relativity, and Abstract Concepts.
Top Cogn Sci. 2023 Oct;15(4):662-667. doi: 10.1111/tops.12663. Epub 2023 May 10.
3. A model of working memory for encoding multiple items and ordered sequences exploiting the theta-gamma code.
Cogn Neurodyn. 2023 Apr;17(2):489-521. doi: 10.1007/s11571-022-09836-9. Epub 2022 Jul 16.
4. Influence of language on perception and concept formation in a brain-constrained deep neural network model.
Philos Trans R Soc Lond B Biol Sci. 2023 Feb 13;378(1870):20210373. doi: 10.1098/rstb.2021.0373. Epub 2022 Dec 26.
5. Phase separation of competing memories along the human hippocampal theta rhythm.
Elife. 2022 Nov 17;11:e80633. doi: 10.7554/eLife.80633.
6. Do sparse brain activity patterns underlie human cognition?
Neuroimage. 2022 Nov;263:119633. doi: 10.1016/j.neuroimage.2022.119633. Epub 2022 Sep 14.
7. Superposition mechanism as a neural basis for understanding others.
Sci Rep. 2022 Feb 21;12(1):2859. doi: 10.1038/s41598-022-06717-3.
8. Orthogonal representations for robust context-dependent task performance in brains and neural networks.
Neuron. 2022 Apr 6;110(7):1258-1270.e11. doi: 10.1016/j.neuron.2022.01.005. Epub 2022 Jan 31.
9. Orthogonal neural codes for speech in the infant brain.
Proc Natl Acad Sci U S A. 2021 Aug 3;118(31). doi: 10.1073/pnas.2020410118.
10. Biological constraints on neural network models of cognitive function.
Nat Rev Neurosci. 2021 Aug;22(8):488-502. doi: 10.1038/s41583-021-00473-5. Epub 2021 Jun 28.