

Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning

Affiliations

Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America.

Google DeepMind, London, United Kingdom.

Publication information

PLoS Comput Biol. 2023 Aug 25;19(8):e1011316. doi: 10.1371/journal.pcbi.1011316. eCollection 2023 Aug.

DOI: 10.1371/journal.pcbi.1011316
PMID: 37624841
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10497163/
Abstract

The ability to acquire abstract knowledge is a hallmark of human intelligence and is believed by many to be one of the core differences between humans and neural network models. Agents can be endowed with an inductive bias towards abstraction through meta-learning, where they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a novel methodology for building "task metamers" that closely match the statistics of the abstract tasks but use a different underlying generative process, and evaluate performance on both abstract and metamer tasks. We find that humans perform better at abstract tasks than metamer tasks whereas common neural network architectures typically perform worse on the abstract tasks than the matched metamers. This work provides a foundation for characterizing differences between humans and machine learning that can be used in future work towards developing machines with more human-like behavior.
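The abstract's core manipulation, generating tasks from an abstract rule versus "metamer" tasks that match the statistics of those tasks but arise from a different generative process, can be illustrated with a toy sketch. This is not the paper's actual task distribution or method; the alternation rule, the four-symbol alphabet, and the bigram fit below are assumptions chosen purely for illustration.

```python
import random
from collections import Counter

random.seed(0)

def rule_sequences(n, length=8):
    """Abstract rule: each sequence strictly alternates two randomly
    chosen symbols, e.g. 'CACACACA'."""
    seqs = []
    for _ in range(n):
        a, b = random.sample("ABCD", 2)
        seqs.append((a + b) * (length // 2))
    return seqs

def fit_bigrams(seqs):
    """Estimate first-order (bigram) transition counts from the
    rule-generated corpus."""
    counts = Counter()
    for s in seqs:
        for x, y in zip(s, s[1:]):
            counts[(x, y)] += 1
    return counts

def metamer_sequences(n, counts, length=8):
    """Metamer: sample from the fitted bigram chain -- matches the
    corpus's pairwise statistics, but mixes alternation partners
    across sequences, so the abstract rule is usually violated."""
    starts = [x for (x, _) in counts]
    seqs = []
    for _ in range(n):
        s = random.choice(starts)
        while len(s) < length:
            cands = [(y, c) for (x, y), c in counts.items() if x == s[-1]]
            symbols, weights = zip(*cands)
            s += random.choices(symbols, weights=weights)[0]
        seqs.append(s)
    return seqs

def follows_rule(s):
    """True iff the sequence alternates exactly two distinct symbols."""
    return s[0] != s[1] and s == (s[0] + s[1]) * (len(s) // 2)

rules = rule_sequences(200)
metamers = metamer_sequences(200, fit_bigrams(rules))
print(all(follows_rule(s) for s in rules))     # True by construction
print(sum(follows_rule(s) for s in metamers))  # typically far below 200
```

The point of the sketch is that a learner sensitive only to pairwise statistics cannot distinguish the two generators, whereas a learner that has acquired the abstraction (strict two-symbol alternation) can, which is the kind of contrast the paper's metamer evaluation is designed to expose.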


Figures
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01ea/10497163/27becf452ee0/pcbi.1011316.g001.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01ea/10497163/065d1d5ca6b0/pcbi.1011316.g002.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01ea/10497163/72529aaabe53/pcbi.1011316.g003.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01ea/10497163/0e928008219f/pcbi.1011316.g004.jpg

Similar articles

1. Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning.
PLoS Comput Biol. 2023 Aug 25;19(8):e1011316. doi: 10.1371/journal.pcbi.1011316. eCollection 2023 Aug.
2. Comparing continual task learning in minds and machines.
Proc Natl Acad Sci U S A. 2018 Oct 30;115(44):E10313-E10322. doi: 10.1073/pnas.1800755115. Epub 2018 Oct 15.
3. How to grow a mind: statistics, structure, and abstraction.
Science. 2011 Mar 11;331(6022):1279-85. doi: 10.1126/science.1192788.
4. Neural circuits for learning context-dependent associations of stimuli.
Neural Netw. 2018 Nov;107:48-60. doi: 10.1016/j.neunet.2018.07.018. Epub 2018 Aug 13.
5. Multi-task neural networks by learned contextual inputs.
Neural Netw. 2024 Nov;179:106528. doi: 10.1016/j.neunet.2024.106528. Epub 2024 Jul 9.
6. Emergent mechanisms of evidence integration in recurrent neural networks.
PLoS One. 2018 Oct 16;13(10):e0205676. doi: 10.1371/journal.pone.0205676. eCollection 2018.
7. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization.
Proc Natl Acad Sci U S A. 2018 Oct 30;115(44):E10467-E10475. doi: 10.1073/pnas.1803839115. Epub 2018 Oct 12.
8. Automatic extraction of cancer registry reportable information from free-text pathology reports using multitask convolutional neural networks.
J Am Med Inform Assoc. 2020 Jan 1;27(1):89-98. doi: 10.1093/jamia/ocz153.
9. Model metamers reveal divergent invariances between biological and artificial neural networks.
Nat Neurosci. 2023 Nov;26(11):2017-2034. doi: 10.1038/s41593-023-01442-0. Epub 2023 Oct 16.
10. How to incorporate biological insights into network models and why it matters.
J Physiol. 2023 Aug;601(15):3037-3053. doi: 10.1113/JP282755. Epub 2022 Sep 25.

Cited by

1. Decomposing dynamical subprocesses for compositional generalization.
Proc Natl Acad Sci U S A. 2024 Nov 12;121(46):e2408134121. doi: 10.1073/pnas.2408134121. Epub 2024 Nov 8.

References

1. A language of thought for the mental representation of geometric shapes.
Cogn Psychol. 2022 Dec;139:101527. doi: 10.1016/j.cogpsych.2022.101527. Epub 2022 Nov 17.
2. Rational arbitration between statistics and rules in human sequence processing.
Nat Hum Behav. 2022 Aug;6(8):1087-1103. doi: 10.1038/s41562-021-01259-6. Epub 2022 May 2.
3. Abstraction and analogy-making in artificial intelligence.
Ann N Y Acad Sci. 2021 Dec;1505(1):79-101. doi: 10.1111/nyas.14619. Epub 2021 Jun 25.
4. Meta-Learning in Neural Networks: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5149-5169. doi: 10.1109/TPAMI.2021.3079209. Epub 2022 Aug 4.
5. The Perception of Relations.
Trends Cogn Sci. 2021 Jun;25(6):475-492. doi: 10.1016/j.tics.2021.01.006. Epub 2021 Mar 31.
6. Structure learning and the posterior parietal cortex.
Prog Neurobiol. 2020 Jan;184:101717. doi: 10.1016/j.pneurobio.2019.101717. Epub 2019 Oct 24.
7. What Is a Cognitive Map? Organizing Knowledge for Flexible Behavior.
Neuron. 2018 Oct 24;100(2):490-509. doi: 10.1016/j.neuron.2018.10.002.
8. Prefrontal cortex as a meta-reinforcement learning system.
Nat Neurosci. 2018 Jun;21(6):860-868. doi: 10.1038/s41593-018-0147-8. Epub 2018 May 14.
9. Building machines that learn and think like people.
Behav Brain Sci. 2017 Jan;40:e253. doi: 10.1017/S0140525X16001837. Epub 2016 Nov 24.
10. Human-level concept learning through probabilistic program induction.
Science. 2015 Dec 11;350(6266):1332-8. doi: 10.1126/science.aab3050.