
Neural state space alignment for magnitude generalization in humans and recurrent networks.

Affiliations

Department of Experimental Psychology, University of Oxford, Oxford, UK.

Publication information

Neuron. 2021 Apr 7;109(7):1214-1226.e8. doi: 10.1016/j.neuron.2021.02.004. Epub 2021 Feb 23.

Abstract

A prerequisite for intelligent behavior is to understand how stimuli are related and to generalize this knowledge across contexts. Generalization can be challenging when relational patterns are shared across contexts but exist on different physical scales. Here, we studied neural representations in humans and recurrent neural networks performing a magnitude comparison task, for which it was advantageous to generalize concepts of "more" or "less" between contexts. Using multivariate analysis of human brain signals and of neural network hidden unit activity, we observed that both systems developed parallel neural "number lines" for each context. In both model systems, these number state spaces were aligned in a way that explicitly facilitated generalization of relational concepts (more and less). These findings suggest a previously overlooked role for neural normalization in supporting transfer of a simple form of abstract relational knowledge (magnitude) in humans and machine learning systems.
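The core idea of the abstract — that normalizing magnitudes within each context aligns the contexts' "number lines" so a single more/less read-out generalizes — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' code: the contexts, magnitudes, and z-score normalization below are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the paper's implementation): two contexts present
# magnitudes on different physical scales. Normalizing within each context
# maps both onto the same "number line", so a comparison rule learned in
# one context transfers to the other.

ctx_small = np.arange(1, 7, dtype=float)        # magnitudes 1..6
ctx_large = np.arange(10, 70, 10, dtype=float)  # magnitudes 10..60

def normalize(x):
    """Z-score within a context (a simple stand-in for neural normalization)."""
    return (x - x.mean()) / x.std()

z_small = normalize(ctx_small)
z_large = normalize(ctx_large)

# After within-context normalization, the two state spaces coincide:
# the same relational concept ("more" = larger normalized value) applies
# in both contexts without relearning.
assert np.allclose(z_small, z_large)

def is_more(z_left, z_right):
    """Shared read-out: is the right item 'more' than the left?"""
    return z_right > z_left

# The rule generalizes: comparing items 2 vs 5 in the small context gives
# the same answer as comparing 20 vs 50 in the large context.
print(is_more(z_small[1], z_small[4]), is_more(z_large[1], z_large[4]))
```

Because `ctx_large` is just `ctx_small` rescaled, z-scoring removes the scale difference entirely; any monotone read-out trained on one context then applies unchanged to the other, which is the sense in which normalization "explicitly facilitates generalization" here.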

