Institute for Psychology and Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany.
Department of Computer Science, Rutgers University, Piscataway, New Jersey, United States of America.
PLoS Comput Biol. 2023 Oct 4;19(10):e1011445. doi: 10.1371/journal.pcbi.1011445. eCollection 2023 Oct.
We propose the "runtime learning" hypothesis, which states that people quickly learn to perform unfamiliar tasks as the tasks arise by using task-relevant instances of concepts stored in memory during mental training. To make learning rapid, the hypothesis claims that only a few class instances are used, but that these instances are especially valuable for training. The paper motivates the hypothesis by describing related ideas from the cognitive science and machine learning literatures. Using computer simulation, we show that deep neural networks (DNNs) can learn effectively from small, curated training sets, and that valuable training items tend to lie toward the centers of data-item clusters in an abstract feature space. In a series of three behavioral experiments, we show that people can also learn effectively from small, curated training sets. Critically, we find that participants' reaction times and fitted drift rates are best accounted for by the confidences of DNNs trained on small datasets of highly valuable items. We conclude that the runtime learning hypothesis is a novel conjecture about the relationship between learning and memory, with the potential to explain a wide variety of cognitive phenomena.
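The notion that valuable training items sit near the centers of clusters in feature space can be illustrated with a minimal sketch (an assumption-laden toy example, not the authors' simulation code): for a set of feature vectors belonging to one class, select the items closest to the class centroid as the curated training subset.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def most_central(vectors, k=1):
    """Return the k vectors closest (Euclidean distance) to the centroid,
    a toy proxy for 'highly valuable' items near a cluster's center."""
    c = centroid(vectors)
    return sorted(vectors, key=lambda v: math.dist(v, c))[:k]

# Toy class cluster in a 2-D feature space: three points near (1, 1)
# and one outlier. The outlier is excluded from the curated subset.
cluster = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (5.0, 5.0)]
print(most_central(cluster, k=2))  # → [(1.0, 1.0), (1.1, 0.9)]
```

Real feature spaces here would be high-dimensional DNN embeddings rather than 2-D points, but the selection principle is the same.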