Abstract representations emerge naturally in neural networks trained to perform multiple tasks.

Affiliations

Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.

Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, NY, USA.

Publication Information

Nat Commun. 2023 Feb 23;14(1):1040. doi: 10.1038/s41467-023-36583-0.

Abstract

Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry, which we call abstract and which is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies; however, it is unknown how they emerge. Here, using feedforward neural networks, we demonstrate that learning multiple tasks, under both supervised and reinforcement learning, causes abstract representations to emerge. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world and, as a consequence, could be pervasive in high-level brain regions. We also make several specific predictions about which variables will be represented abstractly.
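The multitask setup the abstract describes can be illustrated with a toy feedforward network: a single shared hidden layer trained jointly on several supervised readouts of the same latent variables. This is a minimal sketch, not the paper's actual model; the input dimension, hidden size, task definitions, and learning rate are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary latent variables, linearly mixed into a higher-dimensional
# observation (dimensions are illustrative, not from the paper).
n_samples, d_in, d_hidden = 256, 8, 16
latents = rng.integers(0, 2, size=(n_samples, 2)).astype(float)
mix = rng.normal(size=(2, d_in))
x = latents @ mix + 0.1 * rng.normal(size=(n_samples, d_in))

# Three supervised tasks sharing one hidden layer: report each latent,
# plus their product (a nonlinear combination of the two).
targets = np.stack(
    [latents[:, 0], latents[:, 1], latents[:, 0] * latents[:, 1]], axis=1
)

W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
W2 = rng.normal(scale=0.1, size=(d_hidden, targets.shape[1]))

lr, losses = 0.05, []
for _ in range(500):
    # Forward pass: one shared tanh hidden layer, linear task heads.
    h = np.tanh(x @ W1)
    y = h @ W2
    err = y - targets
    losses.append(float((err ** 2).mean()))
    # Backprop of the summed multitask MSE through both layers.
    gW2 = h.T @ err / n_samples
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ gh / n_samples
    W2 -= lr * gW2
    W1 -= lr * gW1
```

Because all tasks read out from the same hidden layer, the joint training objective pressures that layer toward a representation useful for every task at once, which is the mechanism the paper argues produces abstract (disentangled) geometry.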

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e00f/9950464/4fe57217aff2/41467_2023_36583_Fig1_HTML.jpg
