

Similar Articles

1. Transforming task representations to perform novel tasks.
   Proc Natl Acad Sci U S A. 2020 Dec 29;117(52):32970-32981. doi: 10.1073/pnas.2008852117. Epub 2020 Dec 10.
2. What Is the Model in Model-Based Planning?
   Cogn Sci. 2021 Jan;45(1):e12928. doi: 10.1111/cogs.12928.
3. Natural language instructions induce compositional generalization in networks of neurons.
   Nat Neurosci. 2024 May;27(5):988-999. doi: 10.1038/s41593-024-01607-5. Epub 2024 Mar 18.
4. Domain Specificity of Oculomotor Learning after Changes in Sensory Processing.
   J Neurosci. 2017 Nov 22;37(47):11469-11484. doi: 10.1523/JNEUROSCI.1208-17.2017. Epub 2017 Oct 20.
5. Task representations in neural networks trained to perform many cognitive tasks.
   Nat Neurosci. 2019 Feb;22(2):297-306. doi: 10.1038/s41593-018-0310-2. Epub 2019 Jan 14.
6. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
   eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
7. Learning to Forget for Meta-Learning via Task-and-Layer-Wise Attenuation.
   IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7718-7730. doi: 10.1109/TPAMI.2021.3102098. Epub 2022 Oct 4.
8. Two Computational Approaches to Visual Analogy: Task-Specific Models Versus Domain-General Mapping.
   Cogn Sci. 2023 Sep;47(9):e13347. doi: 10.1111/cogs.13347.
9. Revolutionizing Digital Pathology With the Power of Generative Artificial Intelligence and Foundation Models.
   Lab Invest. 2023 Nov;103(11):100255. doi: 10.1016/j.labinv.2023.100255. Epub 2023 Sep 26.
10. Reward-predictive representations generalize across tasks in reinforcement learning.
    PLoS Comput Biol. 2020 Oct 15;16(10):e1008317. doi: 10.1371/journal.pcbi.1008317. eCollection 2020 Oct.

Cited By

1. Computational modeling approaches to emotional development.
   Dev Psychol. 2025 Apr;61(4):679-690. doi: 10.1037/dev0001830. Epub 2024 Sep 26.
2. Reconciling shared versus context-specific information in a neural network model of latent causes.
   Sci Rep. 2024 Jul 22;14(1):16782. doi: 10.1038/s41598-024-64272-5.

References

1. A distributional code for value in dopamine-based reinforcement learning.
   Nature. 2020 Jan;577(7792):671-675. doi: 10.1038/s41586-019-1924-6. Epub 2020 Jan 15.
2. Grandmaster level in StarCraft II using multi-agent reinforcement learning.
   Nature. 2019 Nov;575(7782):350-354. doi: 10.1038/s41586-019-1724-z. Epub 2019 Oct 30.
3. Zero-Shot Learning-A Comprehensive Evaluation of the Good, the Bad and the Ugly.
   IEEE Trans Pattern Anal Mach Intell. 2019 Sep;41(9):2251-2265. doi: 10.1109/TPAMI.2018.2857768. Epub 2018 Jul 19.
4. Building on prior knowledge without building it in.
   Behav Brain Sci. 2017 Jan;40:e268. doi: 10.1017/S0140525X17000176.
5. Building machines that learn and think like people.
   Behav Brain Sci. 2017 Jan;40:e253. doi: 10.1017/S0140525X16001837. Epub 2016 Nov 24.
6. Hybrid computing using a neural network with dynamic external memory.
   Nature. 2016 Oct 27;538(7626):471-476. doi: 10.1038/nature20101. Epub 2016 Oct 12.
7. Mastering the game of Go with deep neural networks and tree search.
   Nature. 2016 Jan 28;529(7587):484-9. doi: 10.1038/nature16961.
8. Interactive activation and mutual constraint satisfaction in perception and cognition.
   Cogn Sci. 2014 Aug;38(6):1139-89. doi: 10.1111/cogs.12146. Epub 2014 Aug 7.
9. Turing on Super-Turing and adaptivity.
   Prog Biophys Mol Biol. 2013 Sep;113(1):117-26. doi: 10.1016/j.pbiomolbio.2013.03.013. Epub 2013 Apr 10.
10. Letting structure emerge: connectionist and dynamical systems approaches to cognition.
    Trends Cogn Sci. 2010 Aug;14(8):348-56. doi: 10.1016/j.tics.2010.06.002. Epub 2010 Jul 2.

Transforming task representations to perform novel tasks.

Author Affiliations

Department of Psychology, Stanford University, Stanford, CA 94305.

Publication Information

Proc Natl Acad Sci U S A. 2020 Dec 29;117(52):32970-32981. doi: 10.1073/pnas.2008852117. Epub 2020 Dec 10.

DOI: 10.1073/pnas.2008852117
PMID: 33303652
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7777120/
Abstract

An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero shot), based on its relationship to previous tasks. Humans can exhibit this cognitive flexibility. By contrast, models that achieve superhuman performance in specific tasks often fail to adapt to even slight task alterations. To address this, we propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks. We begin by learning vector representations of tasks. To adapt to new tasks, we propose metamappings, higher-order tasks that transform basic task representations. We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning. We compare to both human adaptability and language-based approaches to zero-shot learning. Across these domains, metamapping is successful, often achieving 80 to 90% performance, without any data, on a novel task, even when the new task directly contradicts prior experience. We further show that metamapping can not only generalize to new tasks via learned relationships, but can also generalize using novel relationships unseen during training. Finally, using metamapping as a starting point can dramatically accelerate later learning on a new task and reduce learning time and cumulative error substantially. Our results provide insight into a possible computational basis of intelligent adaptability and offer a possible framework for modeling cognitive flexibility and building more flexible artificial intelligence systems.
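The abstract's core idea — learn vector representations of tasks, then learn a meta-mapping that transforms one task representation into another, so a transformed representation can be executed zero-shot — can be illustrated with a deliberately tiny sketch. Everything here (linear tasks represented by their weight vectors, a "negate the outputs" meta-mapping fit by least squares) is an illustrative assumption for exposition, not the paper's actual architecture, which uses learned deep-network embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's model): each "task" is a linear
# function y = w.x, represented directly by its weight vector w.  The
# meta-mapping "negate the outputs" sends a task representation w to the
# representation of the negated task, -w.
dim = 8
train_tasks = rng.normal(size=(20, dim))   # representations of known tasks
train_targets = -train_tasks               # their representations after the mapping

# Learn the meta-mapping as a least-squares linear transform M: w -> w @ M,
# fit from example (task, transformed-task) representation pairs.
M, *_ = np.linalg.lstsq(train_tasks, train_targets, rcond=None)

# Zero-shot transfer: apply the learned mapping to a task whose transformed
# version was never seen during training.
novel_task = rng.normal(size=dim)
predicted = novel_task @ M
target = -novel_task
print(np.allclose(predicted, target, atol=1e-6))
```

The point of the sketch is the division of labor: the mapping is learned over *task representations*, not over task data, so applying it to a held-out task requires no examples from that task — the zero-shot behavior the abstract describes.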
