Department of Psychology and Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544; and
Department of Psychology and Institute for Neuroscience, University of Texas at Austin, Austin, TX 78712.
Proc Natl Acad Sci U S A. 2014 Jun 17;111(24):8997-9002. doi: 10.1073/pnas.1319438111. Epub 2014 Jun 2.
The capacity of long-term memory is thought to be virtually unlimited. However, our memory bank may need to be pruned regularly to ensure that the information most important for behavior can be stored and accessed efficiently. Using functional magnetic resonance imaging of the human brain, we report the discovery of a context-based mechanism for determining which memories to prune. Specifically, when a previously experienced context is reencountered, the brain automatically generates predictions about which items should appear in that context. If an item fails to appear when strongly expected, its representation in memory is weakened, and it is more likely to be forgotten. We find robust support for this mechanism using multivariate pattern classification and pattern similarity analyses. The results are explained by a model in which context-based predictions activate item representations just enough for them to be weakened during a misprediction. These findings reveal an ongoing and adaptive process for pruning unreliable memories.
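The closing model — context-based predictions activating item representations "just enough" for them to be weakened — can be illustrated with a minimal sketch. It assumes a nonmonotonic plasticity rule in which moderate activation weakens a memory while full activation strengthens it; the function name, activation bands, and learning rate below are illustrative choices, not values from the paper.

```python
def plasticity_update(strength, activation, weaken_band=(0.2, 0.7), rate=0.1):
    """Nonmonotonic plasticity sketch: low activation leaves a memory
    unchanged, moderate activation weakens it, high activation
    strengthens it. Bands and rate are illustrative, not fitted."""
    lo, hi = weaken_band
    if activation < lo:
        # Too weakly activated to trigger any learning.
        return strength
    if activation < hi:
        # "Just enough" activation (e.g., a mispredicted item): weaken.
        return max(0.0, strength - rate * (activation - lo) / (hi - lo))
    # Fully activated (e.g., the item actually appears): strengthen.
    return min(1.0, strength + rate)

# A mispredicted item is only partially activated by context and is
# weakened; an item that actually appears is fully activated and
# strengthened.
mispredicted = plasticity_update(0.8, activation=0.5)
reencountered = plasticity_update(0.8, activation=0.9)
```

Under this rule, the mispredicted item ends up weaker than it started while the re-encountered item ends up stronger, matching the abstract's claim that items failing to appear when strongly expected become more likely to be forgotten.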