Blocked training facilitates learning of multiple schemas.

Author information

Beukers Andre O, Collin Silvy H P, Kempner Ross P, Franklin Nicholas T, Gershman Samuel J, Norman Kenneth A

Affiliations

Department of Psychology and Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.

Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, The Netherlands.

Publication information

Commun Psychol. 2024 Apr 9;2(1):28. doi: 10.1038/s44271-024-00079-4.

Abstract

We all possess a mental library of schemas that specify how different types of events unfold. How are these schemas acquired? A key challenge is that learning a new schema can catastrophically interfere with old knowledge. One solution to this dilemma is to use interleaved training to learn a single representation that accommodates all schemas. However, another class of models posits that catastrophic interference can be avoided by splitting off new representations when large prediction errors occur. A key differentiating prediction is that, according to splitting models, catastrophic interference can be prevented even under blocked training curricula. We conducted a series of semi-naturalistic experiments and simulations with Bayesian and neural network models to compare the predictions made by the "splitting" versus "non-splitting" hypotheses of schema learning. We found better performance in blocked compared to interleaved curricula, and explain these results using a Bayesian model that incorporates representational splitting in response to large prediction errors. In a follow-up experiment, we validated the model prediction that inserting blocked training early in learning leads to better learning performance than inserting blocked training later in learning. Our results suggest that different learning environments (i.e., curricula) play an important role in shaping schema composition.
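The core mechanism under test can be illustrated with a toy sketch. This is not the authors' Bayesian model, just a minimal, hypothetical prototype learner that either refines an existing schema representation (small prediction error) or splits off a new one (large prediction error), so that blocked training does not overwrite old knowledge. The threshold, learning rate, and 2-D toy "schemas" are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the published model): a learner that keeps
# one prototype vector per schema and splits off a new prototype when
# the best-matching prototype's prediction error exceeds a threshold.
class SplittingLearner:
    def __init__(self, split_threshold=2.0, lr=0.1):
        self.split_threshold = split_threshold  # error level that triggers a split
        self.lr = lr                            # learning rate for prototype updates
        self.prototypes = []                    # one vector per inferred schema

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x.copy())
            return 0
        errors = [np.linalg.norm(x - p) for p in self.prototypes]
        best = int(np.argmin(errors))
        if errors[best] > self.split_threshold:
            # Large prediction error: split off a new representation
            # instead of overwriting the old schema, avoiding
            # catastrophic interference even under blocked training.
            self.prototypes.append(x.copy())
            return len(self.prototypes) - 1
        # Small prediction error: refine the existing schema.
        self.prototypes[best] += self.lr * (x - self.prototypes[best])
        return best

# Blocked curriculum: all examples of schema A, then all of schema B.
rng = np.random.default_rng(0)
schema_a = [np.array([0.0, 0.0]) + 0.1 * rng.standard_normal(2) for _ in range(20)]
schema_b = [np.array([5.0, 5.0]) + 0.1 * rng.standard_normal(2) for _ in range(20)]
learner = SplittingLearner()
for x in schema_a + schema_b:
    learner.observe(x)
print(len(learner.prototypes))  # well-separated toy schemas yield 2 prototypes
```

In this sketch the first B example arrives with a large error relative to the A prototype, so the learner allocates a second prototype rather than dragging the A representation toward B; a single-representation ("non-splitting") learner would instead overwrite A, which is the interference pattern the abstract contrasts against.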

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/19f4/11332129/7ecb1375a6a8/44271_2024_79_Fig1_HTML.jpg
