
What happens when we relearn part of what we previously knew? Predictions and constraints for models of long-term memory.

Author Information

Atkins P W

Affiliation

Australian Graduate School of Management, University of New South Wales, University of Sydney, Sydney, NSW, 2052, Australia.

Publication Information

Psychol Res. 2001;65(3):202-15. doi: 10.1007/s004269900015.

Abstract

Part-set relearning studies examine whether relearning a subset of previously learned items impairs or improves memory for the other previously learned items that are not relearned. Atkins and Murre have examined part-set relearning using multi-layer networks that learn by optimizing performance on a complete set of items. For this paper, four computer models that learn each item additively and separately were tested using the part-set relearning procedure (Hebbian network, CHARM, MINERVA 2, and SAM). Optimization models predict that part-set relearning should improve memory for items not relearned, while additive models make the opposite prediction. This distinction parallels the relative ability of these models to account for interference phenomena. Part-set relearning therefore provides another source of evidence for choosing between optimization and additive models of long-term memory. A new study suggests that the predictions of the additive models are broadly supported.
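
To make the additive prediction concrete, the sketch below runs the part-set relearning procedure on a simple Hebbian heteroassociator, one of the four additive model classes named in the abstract, though not the authors' actual simulations. The item count, vector dimensionality, uniform trace decay, and mildly correlated random cues are illustrative assumptions chosen for this sketch; the point is only that relearned pairs are added onto the same weight matrix, so any effect on the non-relearned items comes from the extra cross-talk those added traces introduce.

```python
# Minimal sketch of the part-set relearning procedure applied to an additive
# (Hebbian) heteroassociative memory. This is NOT the paper's simulation code;
# the item count, dimensionality, decay factor, and cue correlation are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, DIM = 20, 100          # 20 cue-target pairs, 100-dimensional vectors
RELEARNED = range(10)           # first half of the set is relearned
DECAY = 0.5                     # uniform forgetting applied before relearning

def unit(v):
    return v / np.linalg.norm(v)

# Random, mildly correlated cue vectors plus independent target vectors.
base = rng.normal(size=DIM)
cues = np.stack([unit(0.4 * base + rng.normal(size=DIM)) for _ in range(N_ITEMS)])
targets = np.stack([unit(rng.normal(size=DIM)) for _ in range(N_ITEMS)])

def hebbian_study(W, items):
    """Additive learning: each studied pair adds its outer product to W."""
    for i in items:
        W = W + np.outer(targets[i], cues[i])
    return W

def recall_quality(W, items):
    """Mean cosine similarity between the retrieved vector W @ cue and the target."""
    return np.mean([unit(W @ cues[i]) @ targets[i] for i in items])

# 1. Original learning of the complete set.
W = hebbian_study(np.zeros((DIM, DIM)), range(N_ITEMS))

# 2. Forgetting, modelled here as uniform trace decay.
W = DECAY * W

# 3. Part-set relearning: only the first half of the items is studied again.
not_relearned = [i for i in range(N_ITEMS) if i not in RELEARNED]
before = recall_quality(W, not_relearned)
W = hebbian_study(W, RELEARNED)
after = recall_quality(W, not_relearned)

print(f"non-relearned items, cued-recall quality before relearning: {before:.3f}")
print(f"non-relearned items, cued-recall quality after  relearning: {after:.3f}")
```

With correlated cues, cued-recall quality for the non-relearned items typically does not improve after part-set relearning in a sketch like this, which is the qualitative pattern the abstract attributes to additive models. An optimization model would instead be retrained by error-driven weight adjustment (e.g., gradient descent) on the relearned subset, which, per the abstract, is predicted to benefit the non-relearned items as well.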

