Transforming task representations to perform novel tasks.

Affiliations

Department of Psychology, Stanford University, Stanford CA 94305.

Publication Information

Proc Natl Acad Sci U S A. 2020 Dec 29;117(52):32970-32981. doi: 10.1073/pnas.2008852117. Epub 2020 Dec 10.

Abstract

An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot), based on its relationship to previous tasks. Humans can exhibit this cognitive flexibility. By contrast, models that achieve superhuman performance in specific tasks often fail to adapt to even slight task alterations. To address this, we propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks. We begin by learning vector representations of tasks. To adapt to new tasks, we propose meta-mappings, higher-order tasks that transform basic task representations. We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning. We compare to both human adaptability and language-based approaches to zero-shot learning. Across these domains, meta-mapping is successful, often achieving 80 to 90% performance, without any data, on a novel task, even when the new task directly contradicts prior experience. We further show that meta-mapping can not only generalize to new tasks via learned relationships, but can also generalize using novel relationships unseen during training. Finally, using meta-mapping as a starting point can dramatically accelerate later learning on a new task, substantially reducing learning time and cumulative error. Our results provide insight into a possible computational basis of intelligent adaptability and offer a framework for modeling cognitive flexibility and building more flexible artificial intelligence systems.
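The abstract's core idea — tasks live in a shared embedding space, and a meta-mapping is itself a learned function that transforms one task representation into the representation of a related task — can be illustrated with a deliberately simplified sketch. Everything below (the linear form of the mapping, the random embeddings, the variable names) is an illustrative assumption, not the authors' actual architecture, which learns tasks, representations, and meta-mappings with neural networks:

```python
import numpy as np

# Sketch of the meta-mapping idea: a meta-mapping turns the vector
# representation of one task into the representation of a related task
# (e.g. "try to lose" from "try to win"). Here the mapping is fit as a
# simple linear map from example pairs -- an assumption for clarity.

rng = np.random.default_rng(0)
dim = 8

# Ground-truth relationship between paired tasks (unknown to the model).
true_map = rng.normal(size=(dim, dim))

# Embeddings of training tasks, paired with their transformed versions.
train_tasks = rng.normal(size=(20, dim))
train_targets = train_tasks @ true_map.T

# Fit the meta-mapping from example (task, transformed-task) pairs.
meta_map, *_ = np.linalg.lstsq(train_tasks, train_targets, rcond=None)

# Zero-shot adaptation: transform a task representation never seen
# while fitting the meta-mapping, yielding a representation for a task
# the system has no direct experience with.
novel_task = rng.normal(size=dim)
predicted = novel_task @ meta_map
expected = true_map @ novel_task

print(np.allclose(predicted, expected, atol=1e-5))
```

The least-squares fit only mirrors the logical structure of the framework: examples of a relationship go in, and a reusable transformation of task representations comes out, which can then be applied to unseen tasks without any data from the new task itself.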


Similar Articles

1. What Is the Model in Model-Based Planning? Cogn Sci. 2021 Jan;45(1):e12928. doi: 10.1111/cogs.12928.
2. Domain Specificity of Oculomotor Learning after Changes in Sensory Processing. J Neurosci. 2017 Nov 22;37(47):11469-11484. doi: 10.1523/JNEUROSCI.1208-17.2017. Epub 2017 Oct 20.
3. Learning to Forget for Meta-Learning via Task-and-Layer-Wise Attenuation. IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7718-7730. doi: 10.1109/TPAMI.2021.3102098. Epub 2022 Oct 4.
4. Reward-predictive representations generalize across tasks in reinforcement learning. PLoS Comput Biol. 2020 Oct 15;16(10):e1008317. doi: 10.1371/journal.pcbi.1008317. eCollection 2020 Oct.

